I0511 20:42:10.304884 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0511 20:42:10.386487 6 e2e.go:109] Starting e2e run "06fb8866-34ad-4a7a-a109-89878b41b6c2" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589229729 - Will randomize all specs
Will run 278 of 4842 specs
May 11 20:42:10.451: INFO: >>> kubeConfig: /root/.kube/config
May 11 20:42:10.453: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 11 20:42:10.472: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 11 20:42:10.500: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 11 20:42:10.500: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 11 20:42:10.500: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 11 20:42:10.508: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 11 20:42:10.508: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 11 20:42:10.508: INFO: e2e test version: v1.17.4
May 11 20:42:10.509: INFO: kube-apiserver version: v1.17.2
May 11 20:42:10.509: INFO: >>> kubeConfig: /root/.kube/config
May 11 20:42:10.513: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:42:10.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
May 11 20:42:11.090: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:42:11.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5615" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":1,"skipped":27,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:42:11.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test hostPath mode
May 11 20:42:11.921: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3695" to be "success or failure"
May 11 20:42:12.631: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 710.287544ms
May 11 20:42:14.812: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.890923083s
May 11 20:42:16.879: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.957549666s
May 11 20:42:18.883: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.961530335s
May 11 20:42:20.887: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.965581831s
STEP: Saw pod success
May 11 20:42:20.887: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
May 11 20:42:20.889: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
May 11 20:42:20.966: INFO: Waiting for pod pod-host-path-test to disappear
May 11 20:42:20.976: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:42:20.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3695" for this suite.
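For context on the HostPath test above: it creates a pod ("pod-host-path-test", with a container named "test-container-1", both visible in the log) that mounts a hostPath volume and checks the volume's mode. A minimal sketch of that kind of manifest, written as a plain Python dict following the Kubernetes v1 Pod schema (image, path, and command here are illustrative assumptions, not taken from the test source):

```python
# Sketch of a pod mounting a hostPath volume, the feature this test exercises.
# The pod and container names come from the log; everything else is assumed.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-host-path-test"},
    "spec": {
        "restartPolicy": "Never",  # conformance pods run to completion
        "containers": [{
            "name": "test-container-1",
            "image": "busybox",  # assumption: any image with a shell
            "command": ["sh", "-c", "ls -ld /test-volume"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
        }],
        "volumes": [{
            "name": "test-volume",
            # hostPath mounts a directory from the node's filesystem
            "hostPath": {"path": "/tmp/test-dir", "type": "DirectoryOrCreate"},
        }],
    },
}
```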
• [SLOW TEST:9.557 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":38,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:42:20.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 20:42:27.046: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:42:28.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8124" for this suite.
• [SLOW TEST:7.538 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":52,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:42:28.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-3d932741-6042-4a87-b8de-66f6ac81e89b
STEP: Creating a pod to test consume configMaps
May 11 20:42:29.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee" in namespace "configmap-9870" to be "success or failure"
May 11 20:42:29.658: INFO: Pod "pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee": Phase="Pending", Reason="", readiness=false. Elapsed: 68.014993ms
May 11 20:42:31.905: INFO: Pod "pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314116955s
May 11 20:42:34.224: INFO: Pod "pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.633471513s
May 11 20:42:36.424: INFO: Pod "pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.833692153s
STEP: Saw pod success
May 11 20:42:36.424: INFO: Pod "pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee" satisfied condition "success or failure"
May 11 20:42:36.600: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee container configmap-volume-test:
STEP: delete the pod
May 11 20:42:37.588: INFO: Waiting for pod pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee to disappear
May 11 20:42:37.616: INFO: Pod pod-configmaps-8227e7df-868f-4644-8eaf-246ce017f5ee no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:42:37.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9870" for this suite.
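The ConfigMap test above ("mappings and Item mode set") consumes a ConfigMap through a volume whose `items` list remaps keys to paths and sets a per-file mode. A rough sketch of that volume shape as a plain dict (the ConfigMap name is the one from the log; the key, path, and 0400 mode are illustrative assumptions):

```python
# Sketch of a configMap volume with key-to-path mappings and an explicit
# per-item file mode. File modes are given as octal integers in the Pod API.
configmap_volume = {
    "name": "configmap-volume",
    "configMap": {
        "name": "configmap-test-volume-map-3d932741-6042-4a87-b8de-66f6ac81e89b",
        "items": [{
            "key": "data-1",            # assumed key in the ConfigMap
            "path": "path/to/data-2",   # assumed path inside the mount
            "mode": 0o400,              # assumed per-item mode (read-only)
        }],
    },
}
```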
• [SLOW TEST:9.467 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":58,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:42:37.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
May 11 20:42:40.194: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:40.197: INFO: Number of nodes with available pods: 0
May 11 20:42:40.197: INFO: Node jerma-worker is running more than one daemon pod
May 11 20:42:41.631: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:42.081: INFO: Number of nodes with available pods: 0
May 11 20:42:42.081: INFO: Node jerma-worker is running more than one daemon pod
May 11 20:42:42.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:42.572: INFO: Number of nodes with available pods: 0
May 11 20:42:42.572: INFO: Node jerma-worker is running more than one daemon pod
May 11 20:42:43.422: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:43.923: INFO: Number of nodes with available pods: 0
May 11 20:42:43.923: INFO: Node jerma-worker is running more than one daemon pod
May 11 20:42:44.319: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:44.341: INFO: Number of nodes with available pods: 0
May 11 20:42:44.341: INFO: Node jerma-worker is running more than one daemon pod
May 11 20:42:45.230: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:45.234: INFO: Number of nodes with available pods: 0
May 11 20:42:45.234: INFO: Node jerma-worker is running more than one daemon pod
May 11 20:42:46.409: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:46.882: INFO: Number of nodes with available pods: 0
May 11 20:42:46.882: INFO: Node jerma-worker is running more than one daemon pod
May 11 20:42:47.457: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:47.460: INFO: Number of nodes with available pods: 0
May 11 20:42:47.460: INFO: Node jerma-worker is running more than one daemon pod
May 11 20:42:48.203: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:48.207: INFO: Number of nodes with available pods: 2
May 11 20:42:48.207: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
May 11 20:42:48.265: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:48.276: INFO: Number of nodes with available pods: 1
May 11 20:42:48.276: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:49.281: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:49.284: INFO: Number of nodes with available pods: 1
May 11 20:42:49.284: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:50.481: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:50.558: INFO: Number of nodes with available pods: 1
May 11 20:42:50.558: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:51.280: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:51.283: INFO: Number of nodes with available pods: 1
May 11 20:42:51.283: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:52.283: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:52.286: INFO: Number of nodes with available pods: 1
May 11 20:42:52.286: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:53.282: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:53.285: INFO: Number of nodes with available pods: 1
May 11 20:42:53.285: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:54.282: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:54.285: INFO: Number of nodes with available pods: 1
May 11 20:42:54.285: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:55.282: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:55.285: INFO: Number of nodes with available pods: 1
May 11 20:42:55.285: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:56.282: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:56.286: INFO: Number of nodes with available pods: 1
May 11 20:42:56.286: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:57.281: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:57.285: INFO: Number of nodes with available pods: 1
May 11 20:42:57.285: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:58.283: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:58.285: INFO: Number of nodes with available pods: 1
May 11 20:42:58.285: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:42:59.281: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:42:59.284: INFO: Number of nodes with available pods: 1
May 11 20:42:59.284: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:43:00.314: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:43:00.317: INFO: Number of nodes with available pods: 1
May 11 20:43:00.317: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:43:01.319: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:43:01.323: INFO: Number of nodes with available pods: 1
May 11 20:43:01.323: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:43:02.565: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:43:02.983: INFO: Number of nodes with available pods: 1
May 11 20:43:02.983: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:43:03.470: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:43:03.516: INFO: Number of nodes with available pods: 1
May 11 20:43:03.516: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:43:04.398: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:43:04.402: INFO: Number of nodes with available pods: 1
May 11 20:43:04.402: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 20:43:05.522: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 20:43:05.524: INFO: Number of nodes with available pods: 2
May 11 20:43:05.524: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2302, will wait for the garbage collector to delete the pods
May 11 20:43:05.593: INFO: Deleting DaemonSet.extensions daemon-set took: 3.91891ms
May 11 20:43:05.993: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.197614ms
May 11 20:43:19.752: INFO: Number of nodes with available pods: 0
May 11 20:43:19.752: INFO: Number of running nodes: 0, number of available pods: 0
May 11 20:43:19.889: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2302/daemonsets","resourceVersion":"15345528"},"items":null}
May 11 20:43:19.935: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2302/pods","resourceVersion":"15345529"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:43:19.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2302" for this suite.
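The repeated "can't tolerate node jerma-control-plane with taints" lines above come from the control-plane's `node-role.kubernetes.io/master:NoSchedule` taint: the test's DaemonSet has no matching toleration, so the framework skips that node when counting available pods. For a DaemonSet that should also run on such a node, the pod template needs a toleration. A minimal sketch as a plain dict (selector label and image are assumptions; the DaemonSet name and taint key come from the log):

```python
# Sketch of a DaemonSet whose pods tolerate the control-plane taint seen in
# the log, so they would be scheduled there as well.
daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set"},
    "spec": {
        "selector": {"matchLabels": {"app": "daemon-set"}},  # assumed label
        "template": {
            "metadata": {"labels": {"app": "daemon-set"}},
            "spec": {
                "tolerations": [{
                    # matches the taint from the log: key + NoSchedule effect
                    "key": "node-role.kubernetes.io/master",
                    "operator": "Exists",
                    "effect": "NoSchedule",
                }],
                "containers": [{"name": "app", "image": "busybox"}],  # assumed
            },
        },
    },
}
```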
• [SLOW TEST:41.960 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":5,"skipped":79,"failed":0}
SSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:43:19.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
May 11 20:43:20.710: INFO: Waiting up to 5m0s for pod "var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7" in namespace "var-expansion-4026" to be "success or failure"
May 11 20:43:20.767: INFO: Pod "var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 56.429113ms
May 11 20:43:23.426: INFO: Pod "var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.715565684s
May 11 20:43:26.098: INFO: Pod "var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.388336645s
May 11 20:43:28.170: INFO: Pod "var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.459835376s
May 11 20:43:30.930: INFO: Pod "var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.220060362s
May 11 20:43:32.954: INFO: Pod "var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.243816463s
STEP: Saw pod success
May 11 20:43:32.954: INFO: Pod "var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7" satisfied condition "success or failure"
May 11 20:43:33.048: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7 container dapi-container:
STEP: delete the pod
May 11 20:43:34.523: INFO: Waiting for pod var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7 to disappear
May 11 20:43:34.816: INFO: Pod var-expansion-6248e55f-ffa2-4cff-8531-091da04dd4a7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:43:34.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4026" for this suite.
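The Variable Expansion test above checks that Kubernetes substitutes `$(VAR_NAME)` references in a container's `command`/`args` using the container's environment before running it. A sketch of such a container spec as a plain dict (the container name "dapi-container" appears in the log; the env var and command are illustrative assumptions):

```python
# Sketch of $(VAR) substitution in a container command. The kubelet expands
# $(MY_VAR) from the container's env list; this is not shell expansion.
container = {
    "name": "dapi-container",
    "image": "busybox",  # assumed
    "env": [{"name": "MY_VAR", "value": "test-value"}],  # assumed var
    # $(MY_VAR) below is replaced with "test-value" at container start
    "command": ["sh", "-c", "echo $(MY_VAR)"],
}
```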
• [SLOW TEST:14.873 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":83,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:43:34.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-bd2e26a7-3a2a-4753-bcc4-7db7a189027e
STEP: Creating a pod to test consume secrets
May 11 20:43:37.652: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3" in namespace "projected-6574" to be "success or failure"
May 11 20:43:37.708: INFO: Pod "pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3": Phase="Pending", Reason="", readiness=false. Elapsed: 56.683361ms
May 11 20:43:40.565: INFO: Pod "pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.913361867s
May 11 20:43:42.625: INFO: Pod "pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.973201709s
May 11 20:43:44.678: INFO: Pod "pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.0267624s
May 11 20:43:46.841: INFO: Pod "pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3": Phase="Running", Reason="", readiness=true. Elapsed: 9.189545517s
May 11 20:43:49.068: INFO: Pod "pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.416796684s
STEP: Saw pod success
May 11 20:43:49.069: INFO: Pod "pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3" satisfied condition "success or failure"
May 11 20:43:49.072: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3 container projected-secret-volume-test:
STEP: delete the pod
May 11 20:43:50.393: INFO: Waiting for pod pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3 to disappear
May 11 20:43:50.672: INFO: Pod pod-projected-secrets-eb00b441-1684-4b03-848c-b6bcc34295d3 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:43:50.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6574" for this suite.
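The Projected secret test above mounts a secret through a `projected` volume, whose `sources` list wraps the same key-to-path `items` mapping used by plain secret volumes. A sketch of that volume shape as a plain dict (the secret name is the one from the log; the key and path are illustrative assumptions):

```python
# Sketch of a projected volume carrying a secret with key-to-path mappings,
# the shape the test consumes. Projected volumes can combine several sources
# (secrets, configMaps, downwardAPI) under one mount point.
projected_volume = {
    "name": "projected-secret-volume",
    "projected": {
        "sources": [{
            "secret": {
                "name": "projected-secret-test-map-bd2e26a7-3a2a-4753-bcc4-7db7a189027e",
                "items": [{
                    "key": "data-1",            # assumed key in the Secret
                    "path": "new-path-data-1",  # assumed path in the mount
                }],
            },
        }],
    },
}
```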
• [SLOW TEST:15.854 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":94,"failed":0}
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:43:50.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 20:43:52.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
May 11 20:43:53.792: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:43:53Z generation:1 name:name1 resourceVersion:15345676 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:56de5862-e10e-428d-8c96-d199cea5fb04] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 11 20:44:03.852: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:44:03Z generation:1 name:name2 resourceVersion:15345713 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c77a8722-adab-4836-b751-c968b36fb174] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May 11 20:44:14.056: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:43:53Z generation:2 name:name1 resourceVersion:15345738 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:56de5862-e10e-428d-8c96-d199cea5fb04] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May 11 20:44:24.130: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:44:03Z generation:2 name:name2 resourceVersion:15345760 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c77a8722-adab-4836-b751-c968b36fb174] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 11 20:44:34.135: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:43:53Z generation:2 name:name1 resourceVersion:15345785 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:56de5862-e10e-428d-8c96-d199cea5fb04] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 11 20:44:44.550: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-11T20:44:03Z generation:2 name:name2 resourceVersion:15345809 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:c77a8722-adab-4836-b751-c968b36fb174] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:44:56.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-6118" for this suite.
• [SLOW TEST:65.370 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":8,"skipped":94,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:44:56.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach]
[sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:45:01.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9341" for this suite. • [SLOW TEST:6.033 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":9,"skipped":97,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:45:02.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 20:45:04.873: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 20:45:07.087: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826705, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826705, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826706, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826704, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:45:09.218: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826705, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826705, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826706, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826704, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:45:11.231: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826705, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826705, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826706, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826704, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:45:13.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826705, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826705, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826706, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826704, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 20:45:16.300: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 
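Each of the DeploymentStatus dumps above shows UpdatedReplicas:1 but AvailableReplicas:0, which is why the framework keeps polling. The readiness rule it is waiting on can be sketched like this (simplified field set; assumed condition, not the framework's exact code):

```go
package main

import "fmt"

// deploymentStatus carries just the fields the readiness check below uses.
type deploymentStatus struct {
	ObservedGeneration int64
	Replicas           int32
	UpdatedReplicas    int32
	AvailableReplicas  int32
}

// deploymentReady reports whether a rollout is complete: the controller has
// observed the current generation and every desired replica is both updated
// and available. The log's repeated dumps fail the AvailableReplicas check.
func deploymentReady(generation int64, desired int32, s deploymentStatus) bool {
	return s.ObservedGeneration >= generation &&
		s.UpdatedReplicas == desired &&
		s.AvailableReplicas == desired
}

func main() {
	// Status as logged: 1 replica updated, 0 available yet.
	progressing := deploymentStatus{ObservedGeneration: 1, Replicas: 1, UpdatedReplicas: 1, AvailableReplicas: 0}
	// Status once the webhook pod passes its readiness checks.
	done := deploymentStatus{ObservedGeneration: 1, Replicas: 1, UpdatedReplicas: 1, AvailableReplicas: 1}
	fmt.Println(deploymentReady(1, 1, progressing), deploymentReady(1, 1, done)) // prints: false true
}
```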
[It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:45:16.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3557" for this suite. STEP: Destroying namespace "webhook-3557-markers" for this suite. 
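The discovery steps above fetch /apis and look for the admissionregistration.k8s.io group and its v1 version. A sketch of that lookup against an APIGroupList-shaped document (field names follow the real schema; the sample payload is invented):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal shapes for the /apis discovery document (APIGroupList).
type groupVersion struct {
	GroupVersion string `json:"groupVersion"`
	Version      string `json:"version"`
}

type apiGroup struct {
	Name     string         `json:"name"`
	Versions []groupVersion `json:"versions"`
}

type apiGroupList struct {
	Groups []apiGroup `json:"groups"`
}

// findGroupVersion reports whether the discovery document lists the given
// group/version pair — the same check the test performs for
// admissionregistration.k8s.io/v1.
func findGroupVersion(doc []byte, group, version string) (bool, error) {
	var list apiGroupList
	if err := json.Unmarshal(doc, &list); err != nil {
		return false, err
	}
	for _, g := range list.Groups {
		if g.Name != group {
			continue
		}
		for _, v := range g.Versions {
			if v.Version == version {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"groups":[{"name":"admissionregistration.k8s.io","versions":[{"groupVersion":"admissionregistration.k8s.io/v1","version":"v1"}]}]}`)
	ok, err := findGroupVersion(sample, "admissionregistration.k8s.io", "v1")
	fmt.Println(ok, err) // prints: true <nil>
}
```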
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.942 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":10,"skipped":105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:45:17.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 20:45:18.924: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 20:45:21.291: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826719, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:45:23.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826719, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:45:25.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826719, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 20:45:27.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826719, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724826718, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 20:45:30.450: INFO: Waiting for amount 
of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 20:45:30.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3325-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:45:33.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4157" for this suite. STEP: Destroying namespace "webhook-4157-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.030 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":11,"skipped":142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:45:34.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-e7bdbc05-b715-49b1-aa36-44f0bf8fb52e in namespace container-probe-9933 May 11 20:45:41.414: INFO: Started pod test-webserver-e7bdbc05-b715-49b1-aa36-44f0bf8fb52e in namespace container-probe-9933 STEP: checking the pod's current state and verifying that restartCount is present May 11 20:45:41.417: INFO: Initial restart count of pod test-webserver-e7bdbc05-b715-49b1-aa36-44f0bf8fb52e is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:49:42.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9933" for this suite. 
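The test above watches a pod with an httpGet liveness probe for four minutes and asserts restartCount stays 0. The kubelet's per-probe rule — any 2xx/3xx response is a success, and only a run of consecutive failures triggers a restart — can be sketched as (illustrative helper names, not kubelet code):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// probeOnce performs one HTTP GET liveness check; any 2xx/3xx status
// counts as success, matching the kubelet's httpGet probe rule.
func probeOnce(url string) bool {
	resp, err := http.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

// consecutiveFailures runs n probes and returns the length of the trailing
// failure streak; a restart only happens once this reaches the probe's
// failureThreshold, so a healthy /healthz keeps restartCount at 0.
func consecutiveFailures(url string, n int) int {
	failures := 0
	for i := 0; i < n; i++ {
		if probeOnce(url) {
			failures = 0
		} else {
			failures++
		}
	}
	return failures
}

func main() {
	// Stand-in for the test-webserver pod's /healthz endpoint.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()
	fmt.Println(consecutiveFailures(srv.URL+"/healthz", 5)) // prints: 0
}
```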
• [SLOW TEST:248.656 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:49:42.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 11 20:49:51.446: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:49:52.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"replicaset-236" for this suite. • [SLOW TEST:9.758 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":13,"skipped":255,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:49:52.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 20:49:52.614: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 20:49:52.741: INFO: Waiting for terminating namespaces to be deleted... 
May 11 20:49:52.764: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
May 11 20:49:52.783: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 11 20:49:52.783: INFO: Container kindnet-cni ready: true, restart count 0
May 11 20:49:52.783: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 11 20:49:52.783: INFO: Container kube-proxy ready: true, restart count 0
May 11 20:49:52.783: INFO: pod-adoption-release from replicaset-236 started at 2020-05-11 20:49:44 +0000 UTC (1 container statuses recorded)
May 11 20:49:52.783: INFO: Container pod-adoption-release ready: true, restart count 0
May 11 20:49:52.783: INFO: pod-adoption-release-qqwq7 from replicaset-236 started at 2020-05-11 20:49:51 +0000 UTC (1 container statuses recorded)
May 11 20:49:52.783: INFO: Container pod-adoption-release ready: false, restart count 0
May 11 20:49:52.783: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
May 11 20:49:52.804: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
May 11 20:49:52.804: INFO: Container kube-hunter ready: false, restart count 0
May 11 20:49:52.804: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 11 20:49:52.804: INFO: Container kindnet-cni ready: true, restart count 0
May 11 20:49:52.804: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
May 11 20:49:52.804: INFO: Container kube-bench ready: false, restart count 0
May 11 20:49:52.804: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 11 20:49:52.804: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different
hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-af84ab34-52a7-4982-a92f-a7c19fcf74ce 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-af84ab34-52a7-4982-a92f-a7c19fcf74ce off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-af84ab34-52a7-4982-a92f-a7c19fcf74ce [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:50:17.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6720" for this suite. 
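The scheduler only reports a hostPort conflict when the full (hostIP, hostPort, protocol) triple collides (with 0.0.0.0 overlapping every address), which is why pod1, pod2 and pod3 above all land on the same node. A sketch of that predicate (simplified; the real check lives in the scheduler's node-ports plugin):

```go
package main

import "fmt"

// hostPortKey is the triple the scheduler compares when checking hostPort
// conflicts.
type hostPortKey struct {
	IP       string
	Port     int
	Protocol string
}

// ipsOverlap treats 0.0.0.0 as overlapping every host IP.
func ipsOverlap(a, b string) bool {
	return a == b || a == "0.0.0.0" || b == "0.0.0.0"
}

// conflicts mirrors the rule the test exercises: two pods collide only if
// port AND protocol AND (overlapping) host IP all match.
func conflicts(a, b hostPortKey) bool {
	return a.Port == b.Port && a.Protocol == b.Protocol && ipsOverlap(a.IP, b.IP)
}

func main() {
	pod1 := hostPortKey{"127.0.0.1", 54321, "TCP"}
	pod2 := hostPortKey{"127.0.0.2", 54321, "TCP"}
	pod3 := hostPortKey{"127.0.0.2", 54321, "UDP"}
	// Different hostIP, then different protocol: neither pair conflicts.
	fmt.Println(conflicts(pod1, pod2), conflicts(pod2, pod3), conflicts(pod2, pod2)) // prints: false false true
}
```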
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:25.329 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":14,"skipped":262,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:50:17.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a113288c-0570-4ab8-b91d-e85577fac5bb STEP: Creating a pod to test consume configMaps May 11 20:50:18.168: INFO: Waiting up to 5m0s for pod "pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a" in namespace "configmap-5986" to be "success or 
failure" May 11 20:50:18.226: INFO: Pod "pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a": Phase="Pending", Reason="", readiness=false. Elapsed: 57.937927ms May 11 20:50:20.229: INFO: Pod "pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061041882s May 11 20:50:22.301: INFO: Pod "pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132345876s May 11 20:50:24.324: INFO: Pod "pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.156248129s STEP: Saw pod success May 11 20:50:24.325: INFO: Pod "pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a" satisfied condition "success or failure" May 11 20:50:24.418: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a container configmap-volume-test: STEP: delete the pod May 11 20:50:24.912: INFO: Waiting for pod pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a to disappear May 11 20:50:25.175: INFO: Pod pod-configmaps-bf797ad5-172d-406a-891d-af53ffa5de0a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:50:25.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5986" for this suite. 
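The "volume with mappings" variant above mounts a ConfigMap whose keys are remapped to explicit file paths via the volume's items list. The resulting file view can be sketched as (illustrative names; real behavior is implemented by the kubelet's configmap volume plugin):

```go
package main

import (
	"fmt"
	"sort"
)

// keyToPath models one entry of a volume's "items" list, remapping a
// ConfigMap key to a file path inside the mount.
type keyToPath struct {
	Key  string
	Path string
}

// projectConfigMap returns the file-name -> contents view a container sees
// for a ConfigMap mounted with the given mappings; when an items list is
// present, keys without a mapping are omitted from the volume.
func projectConfigMap(data map[string]string, items []keyToPath) map[string]string {
	out := map[string]string{}
	for _, it := range items {
		if v, ok := data[it.Key]; ok {
			out[it.Path] = v
		}
	}
	return out
}

func main() {
	data := map[string]string{"data-2": "value-2", "data-3": "value-3"}
	files := projectConfigMap(data, []keyToPath{{Key: "data-2", Path: "path/to/data-2"}})
	names := make([]string, 0, len(files))
	for n := range files {
		names = append(names, n)
	}
	sort.Strings(names)
	fmt.Println(names) // prints: [path/to/data-2]
}
```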
• [SLOW TEST:7.739 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:50:25.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-5eb26cab-f9f5-43bc-8f22-be9a921a7112 STEP: Creating secret with name secret-projected-all-test-volume-b46b7b0d-b3a2-49aa-8709-b485339b1f90 STEP: Creating a pod to test Check all projections for projected volume plugin May 11 20:50:26.510: INFO: Waiting up to 5m0s for pod "projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037" in namespace "projected-7074" to be "success or failure" May 11 20:50:26.668: INFO: Pod 
"projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037": Phase="Pending", Reason="", readiness=false. Elapsed: 158.210917ms May 11 20:50:28.672: INFO: Pod "projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162355487s May 11 20:50:30.804: INFO: Pod "projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037": Phase="Pending", Reason="", readiness=false. Elapsed: 4.294104717s May 11 20:50:32.808: INFO: Pod "projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037": Phase="Running", Reason="", readiness=true. Elapsed: 6.297948418s May 11 20:50:34.819: INFO: Pod "projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.308966445s STEP: Saw pod success May 11 20:50:34.819: INFO: Pod "projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037" satisfied condition "success or failure" May 11 20:50:34.821: INFO: Trying to get logs from node jerma-worker pod projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037 container projected-all-volume-test: STEP: delete the pod May 11 20:50:34.871: INFO: Waiting for pod projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037 to disappear May 11 20:50:34.890: INFO: Pod projected-volume-4a62c047-9d96-46bc-9570-deb52d8ab037 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:50:34.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7074" for this suite. 
• [SLOW TEST:9.358 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":16,"skipped":338,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:50:34.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 20:50:35.039: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.519443ms)
May 11 20:50:35.043: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.871262ms)
May 11 20:50:35.046: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.872181ms)
May 11 20:50:35.048: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.717787ms)
May 11 20:50:35.051: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.512806ms)
May 11 20:50:35.054: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.617276ms)
May 11 20:50:35.206: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 152.408623ms)
May 11 20:50:35.210: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.063733ms)
May 11 20:50:35.214: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.580146ms)
May 11 20:50:35.216: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.621319ms)
May 11 20:50:35.219: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.356114ms)
May 11 20:50:35.221: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.648148ms)
May 11 20:50:35.225: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.063067ms)
May 11 20:50:35.228: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.393393ms)
May 11 20:50:35.232: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.63089ms)
May 11 20:50:35.235: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.839888ms)
May 11 20:50:35.237: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.036772ms)
May 11 20:50:35.239: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.235734ms)
May 11 20:50:35.241: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.134592ms)
May 11 20:50:35.243: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.008156ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:50:35.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2824" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":17,"skipped":341,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:50:35.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 11 20:50:35.736: INFO: Waiting up to 5m0s for pod "pod-2325015c-896e-4506-aa2d-a215f4c08091" in namespace "emptydir-2974" to be "success or failure" May 11 20:50:35.905: INFO: Pod "pod-2325015c-896e-4506-aa2d-a215f4c08091": Phase="Pending", Reason="", readiness=false. Elapsed: 169.339828ms May 11 20:50:37.908: INFO: Pod "pod-2325015c-896e-4506-aa2d-a215f4c08091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172254671s May 11 20:50:40.035: INFO: Pod "pod-2325015c-896e-4506-aa2d-a215f4c08091": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.299214492s May 11 20:50:42.059: INFO: Pod "pod-2325015c-896e-4506-aa2d-a215f4c08091": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.322914906s STEP: Saw pod success May 11 20:50:42.059: INFO: Pod "pod-2325015c-896e-4506-aa2d-a215f4c08091" satisfied condition "success or failure" May 11 20:50:42.062: INFO: Trying to get logs from node jerma-worker pod pod-2325015c-896e-4506-aa2d-a215f4c08091 container test-container: STEP: delete the pod May 11 20:50:42.194: INFO: Waiting for pod pod-2325015c-896e-4506-aa2d-a215f4c08091 to disappear May 11 20:50:42.214: INFO: Pod pod-2325015c-896e-4506-aa2d-a215f4c08091 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:50:42.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2974" for this suite. • [SLOW TEST:6.974 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":360,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:50:42.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 11 20:50:52.668: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7776 PodName:pod-sharedvolume-fa657e6e-a05f-4f40-9f2c-d96f5d2818bd ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 20:50:52.668: INFO: >>> kubeConfig: /root/.kube/config I0511 20:50:52.697748 6 log.go:172] (0xc000f8c630) (0xc00278da40) Create stream I0511 20:50:52.697771 6 log.go:172] (0xc000f8c630) (0xc00278da40) Stream added, broadcasting: 1 I0511 20:50:52.699398 6 log.go:172] (0xc000f8c630) Reply frame received for 1 I0511 20:50:52.699439 6 log.go:172] (0xc000f8c630) (0xc00241e000) Create stream I0511 20:50:52.699452 6 log.go:172] (0xc000f8c630) (0xc00241e000) Stream added, broadcasting: 3 I0511 20:50:52.700348 6 log.go:172] (0xc000f8c630) Reply frame received for 3 I0511 20:50:52.700385 6 log.go:172] (0xc000f8c630) (0xc00278dae0) Create stream I0511 20:50:52.700397 6 log.go:172] (0xc000f8c630) (0xc00278dae0) Stream added, broadcasting: 5 I0511 20:50:52.701411 6 log.go:172] (0xc000f8c630) Reply frame received for 5 I0511 20:50:52.806298 6 log.go:172] (0xc000f8c630) Data frame received for 3 I0511 20:50:52.806320 6 log.go:172] (0xc00241e000) (3) Data frame handling I0511 20:50:52.806339 6 log.go:172] (0xc00241e000) (3) Data frame sent I0511 20:50:52.806346 6 log.go:172] (0xc000f8c630) Data frame received for 3 I0511 20:50:52.806351 6 log.go:172] (0xc00241e000) (3) Data frame handling I0511 20:50:52.806635 6 log.go:172] (0xc000f8c630) Data frame received for 5 I0511 20:50:52.806655 6 
log.go:172] (0xc00278dae0) (5) Data frame handling I0511 20:50:52.810304 6 log.go:172] (0xc000f8c630) Data frame received for 1 I0511 20:50:52.810324 6 log.go:172] (0xc00278da40) (1) Data frame handling I0511 20:50:52.810333 6 log.go:172] (0xc00278da40) (1) Data frame sent I0511 20:50:52.810345 6 log.go:172] (0xc000f8c630) (0xc00278da40) Stream removed, broadcasting: 1 I0511 20:50:52.810355 6 log.go:172] (0xc000f8c630) Go away received I0511 20:50:52.810631 6 log.go:172] (0xc000f8c630) (0xc00278da40) Stream removed, broadcasting: 1 I0511 20:50:52.810651 6 log.go:172] (0xc000f8c630) (0xc00241e000) Stream removed, broadcasting: 3 I0511 20:50:52.810658 6 log.go:172] (0xc000f8c630) (0xc00278dae0) Stream removed, broadcasting: 5 May 11 20:50:52.810: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:50:52.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7776" for this suite. 
• [SLOW TEST:10.610 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":19,"skipped":377,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:50:52.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 20:50:59.747: INFO: Waiting up to 5m0s for pod "client-envvars-bd303968-a13d-4e23-a339-047168a6d83d" in namespace "pods-4264" to be "success or failure" May 11 20:50:59.894: INFO: Pod "client-envvars-bd303968-a13d-4e23-a339-047168a6d83d": Phase="Pending", Reason="", readiness=false. Elapsed: 147.086045ms May 11 20:51:02.108: INFO: Pod "client-envvars-bd303968-a13d-4e23-a339-047168a6d83d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.361281737s May 11 20:51:04.112: INFO: Pod "client-envvars-bd303968-a13d-4e23-a339-047168a6d83d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.364803466s May 11 20:51:06.114: INFO: Pod "client-envvars-bd303968-a13d-4e23-a339-047168a6d83d": Phase="Running", Reason="", readiness=true. Elapsed: 6.36721587s May 11 20:51:08.117: INFO: Pod "client-envvars-bd303968-a13d-4e23-a339-047168a6d83d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.370058466s STEP: Saw pod success May 11 20:51:08.117: INFO: Pod "client-envvars-bd303968-a13d-4e23-a339-047168a6d83d" satisfied condition "success or failure" May 11 20:51:08.119: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-bd303968-a13d-4e23-a339-047168a6d83d container env3cont: STEP: delete the pod May 11 20:51:08.162: INFO: Waiting for pod client-envvars-bd303968-a13d-4e23-a339-047168a6d83d to disappear May 11 20:51:08.172: INFO: Pod client-envvars-bd303968-a13d-4e23-a339-047168a6d83d no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:51:08.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4264" for this suite. 
• [SLOW TEST:15.345 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":390,"failed":0} [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:51:08.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 
'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:51:47.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2041" for this suite. • [SLOW TEST:39.826 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":390,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:51:48.005: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 20:51:48.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 11 20:51:48.427: INFO: stderr: "" May 11 20:51:48.427: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:51:48.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1396" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":22,"skipped":399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:51:48.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 20:51:48.588: INFO: Waiting up to 5m0s for pod "busybox-user-65534-fc6244ae-0eb0-4811-95cc-6cbb5f5e671a" in namespace "security-context-test-4426" to be "success or failure" May 11 20:51:48.661: INFO: Pod "busybox-user-65534-fc6244ae-0eb0-4811-95cc-6cbb5f5e671a": Phase="Pending", Reason="", readiness=false. Elapsed: 72.840968ms May 11 20:51:50.664: INFO: Pod "busybox-user-65534-fc6244ae-0eb0-4811-95cc-6cbb5f5e671a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076001995s May 11 20:51:52.709: INFO: Pod "busybox-user-65534-fc6244ae-0eb0-4811-95cc-6cbb5f5e671a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120698857s May 11 20:51:54.726: INFO: Pod "busybox-user-65534-fc6244ae-0eb0-4811-95cc-6cbb5f5e671a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.137907963s May 11 20:51:54.726: INFO: Pod "busybox-user-65534-fc6244ae-0eb0-4811-95cc-6cbb5f5e671a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:51:54.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4426" for this suite. • [SLOW TEST:6.574 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":435,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:51:55.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 
20:51:55.459: INFO: Waiting up to 5m0s for pod "pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073" in namespace "emptydir-9980" to be "success or failure" May 11 20:51:55.507: INFO: Pod "pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073": Phase="Pending", Reason="", readiness=false. Elapsed: 47.079831ms May 11 20:51:57.510: INFO: Pod "pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050325162s May 11 20:51:59.613: INFO: Pod "pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073": Phase="Pending", Reason="", readiness=false. Elapsed: 4.153923723s May 11 20:52:01.728: INFO: Pod "pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.268165306s STEP: Saw pod success May 11 20:52:01.728: INFO: Pod "pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073" satisfied condition "success or failure" May 11 20:52:01.730: INFO: Trying to get logs from node jerma-worker2 pod pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073 container test-container: STEP: delete the pod May 11 20:52:01.775: INFO: Waiting for pod pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073 to disappear May 11 20:52:01.796: INFO: Pod pod-4cdfeb6d-085f-462b-bb6a-c6df918a0073 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:52:01.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9980" for this suite. 
• [SLOW TEST:6.791 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":435,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:52:01.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 11 20:52:06.701: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7e578696-dc1c-4a19-a909-2f29c3ee1b68" May 11 20:52:06.701: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7e578696-dc1c-4a19-a909-2f29c3ee1b68" in namespace "pods-4694" to be "terminated due to deadline exceeded" May 11 20:52:06.712: INFO: Pod "pod-update-activedeadlineseconds-7e578696-dc1c-4a19-a909-2f29c3ee1b68": 
Phase="Running", Reason="", readiness=true. Elapsed: 11.019798ms May 11 20:52:08.716: INFO: Pod "pod-update-activedeadlineseconds-7e578696-dc1c-4a19-a909-2f29c3ee1b68": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014746478s May 11 20:52:08.716: INFO: Pod "pod-update-activedeadlineseconds-7e578696-dc1c-4a19-a909-2f29c3ee1b68" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:52:08.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4694" for this suite. • [SLOW TEST:6.920 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":460,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:52:08.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3b49692e-096f-4723-a613-cba222fd0ae6 STEP: Creating a pod to test consume configMaps May 11 20:52:08.945: INFO: Waiting up to 5m0s for pod "pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2" in namespace "configmap-3612" to be "success or failure" May 11 20:52:08.959: INFO: Pod "pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2": Phase="Pending", Reason="", readiness=false. Elapsed: 13.689842ms May 11 20:52:11.027: INFO: Pod "pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082043429s May 11 20:52:13.031: INFO: Pod "pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085979066s May 11 20:52:15.039: INFO: Pod "pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093616684s STEP: Saw pod success May 11 20:52:15.039: INFO: Pod "pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2" satisfied condition "success or failure" May 11 20:52:15.041: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2 container configmap-volume-test: STEP: delete the pod May 11 20:52:15.304: INFO: Waiting for pod pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2 to disappear May 11 20:52:15.357: INFO: Pod pod-configmaps-74aeee6f-10bd-4021-9a6c-743ef51c58d2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:52:15.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3612" for this suite. 
• [SLOW TEST:6.642 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":482,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:52:15.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-32f54a02-20a2-443f-b111-1243d8f5a371
STEP: Creating a pod to test consume configMaps
May 11 20:52:15.686: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0" in namespace "projected-1837" to be "success or failure"
May 11 20:52:15.690: INFO: Pod "pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.819926ms
May 11 20:52:17.764: INFO: Pod "pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077895999s
May 11 20:52:19.767: INFO: Pod "pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081133646s
May 11 20:52:21.818: INFO: Pod "pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131669864s
STEP: Saw pod success
May 11 20:52:21.818: INFO: Pod "pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0" satisfied condition "success or failure"
May 11 20:52:21.820: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0 container projected-configmap-volume-test:
STEP: delete the pod
May 11 20:52:21.994: INFO: Waiting for pod pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0 to disappear
May 11 20:52:22.027: INFO: Pod pod-projected-configmaps-dd9eda5d-3d9e-4b6f-9ebe-860c467d28d0 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:52:22.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1837" for this suite.
• [SLOW TEST:6.716 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":510,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:52:22.083: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9402.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9402.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 20:52:32.502: INFO: DNS probes using dns-test-13d8e60c-941a-4a45-992b-6c680f1298e2 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9402.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9402.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 20:52:41.474: INFO: File wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local from pod dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c contains 'foo.example.com. ' instead of 'bar.example.com.'
May 11 20:52:41.476: INFO: File jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local from pod dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c contains 'foo.example.com. ' instead of 'bar.example.com.'
May 11 20:52:41.476: INFO: Lookups using dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c failed for: [wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local]
May 11 20:52:46.668: INFO: File wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local from pod dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c contains 'foo.example.com. ' instead of 'bar.example.com.'
May 11 20:52:46.963: INFO: File jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local from pod dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c contains 'foo.example.com. ' instead of 'bar.example.com.'
May 11 20:52:46.963: INFO: Lookups using dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c failed for: [wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local]
May 11 20:52:51.480: INFO: File wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local from pod dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c contains 'foo.example.com. ' instead of 'bar.example.com.'
May 11 20:52:51.482: INFO: File jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local from pod dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c contains 'foo.example.com. ' instead of 'bar.example.com.'
May 11 20:52:51.482: INFO: Lookups using dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c failed for: [wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local]
May 11 20:52:56.480: INFO: File wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local from pod dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c contains 'foo.example.com. ' instead of 'bar.example.com.'
May 11 20:52:56.484: INFO: File jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local from pod dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c contains 'foo.example.com. ' instead of 'bar.example.com.'
May 11 20:52:56.484: INFO: Lookups using dns-9402/dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c failed for: [wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local]
May 11 20:53:01.492: INFO: DNS probes using dns-test-619cfdee-5ec3-4f90-9ab1-12553bf7fa7c succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9402.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9402.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9402.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9402.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 20:53:12.197: INFO: DNS probes using dns-test-4dd62dfa-34ca-47ca-a327-f760da27aa64 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:53:12.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9402" for this suite.
• [SLOW TEST:50.809 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":28,"skipped":516,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:53:12.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0511 20:53:53.589464 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
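Each DNS probe in the ExternalName test above is just a shell loop running `dig +short <name> CNAME` and writing any answer into /results, which the test then compares against the expected target. A minimal sketch of one such comparison, with `dig` stubbed so it runs without a cluster (the stub's stale answer mirrors the retries in the log, where the old CNAME target keeps being served until the updated record propagates):

```shell
#!/bin/sh
# Stub for dig: pretend cluster DNS still serves the pre-update CNAME target.
# (Assumption: in the real probe this is BIND's dig inside the prober pod.)
dig() { echo "foo.example.com."; }

name="dns-test-service-3.dns-9402.svc.cluster.local"
expected="bar.example.com."
got=$(dig +short "$name" CNAME)

if [ "$got" = "$expected" ]; then
  echo "lookup for $name succeeded"
else
  # Same shape as the log's retry messages until the record converges.
  echo "contains '$got' instead of '$expected'"
fi
```

Once the updated record is served, the comparison flips and the framework logs "DNS probes ... succeeded", exactly as in the transcript.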
May 11 20:53:53.589: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:53:53.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9091" for this suite.
• [SLOW TEST:40.704 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":29,"skipped":532,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:53:53.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-44a85263-f0d8-45ac-916f-89db71c3ce1b
STEP: Creating configMap with name cm-test-opt-upd-b2f52287-af13-453e-b620-9bd910613df9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-44a85263-f0d8-45ac-916f-89db71c3ce1b
STEP: Updating configmap cm-test-opt-upd-b2f52287-af13-453e-b620-9bd910613df9
STEP: Creating configMap with name cm-test-opt-create-a541e9e6-2d27-4d34-be47-e44c2995de28
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:55:08.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-852" for this suite.
• [SLOW TEST:74.869 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":538,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:55:08.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5731.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5731.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5731.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5731.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5731.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5731.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 20:55:25.946: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5731.svc.cluster.local from pod dns-5731/dns-test-9a82a481-ac0c-4a50-bbb2-28073c1182b9: Get https://172.30.12.66:32770/api/v1/namespaces/dns-5731/pods/dns-test-9a82a481-ac0c-4a50-bbb2-28073c1182b9/proxy/results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5731.svc.cluster.local: stream error: stream ID 1765; INTERNAL_ERROR
May 11 20:55:28.605: INFO: Lookups using dns-5731/dns-test-9a82a481-ac0c-4a50-bbb2-28073c1182b9 failed for: [wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5731.svc.cluster.local]
May 11 20:55:34.373: INFO: DNS probes using dns-5731/dns-test-9a82a481-ac0c-4a50-bbb2-28073c1182b9 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:55:36.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5731" for this suite.
• [SLOW TEST:28.040 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":31,"skipped":573,"failed":0}
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:55:36.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
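The Hostname probe above builds each pod's DNS A-record name by dashing the pod IP and appending `<namespace>.pod.cluster.local`, via the `awk` pipeline in the logged command. A standalone sketch of just that transformation, with a made-up pod IP (the real probe gets it from `hostname -i` inside the pod):

```shell
#!/bin/sh
# Derive the pod A-record queried by the probe, as in the awk pipeline above.
ip="10.244.1.7"   # assumption: example pod IP; the probe uses `hostname -i`
pod_a_rec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5731.pod.cluster.local"}')
echo "$pod_a_rec"   # → 10-244-1-7.dns-5731.pod.cluster.local
```

This is why the doubled `$$` appears in the logged commands: the probe script is written into a pod spec, where `$$` escapes to a literal `$` for the pod's shell.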
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:55:46.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9968" for this suite.
STEP: Destroying namespace "nsdeletetest-1088" for this suite.
May 11 20:55:46.856: INFO: Namespace nsdeletetest-1088 was already deleted
STEP: Destroying namespace "nsdeletetest-1246" for this suite.
• [SLOW TEST:10.353 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":32,"skipped":576,"failed":0}
S
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:55:46.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name projected-secret-test-04961181-cc88-4a3d-9fc4-432d1ac8a9c1
STEP: Creating a pod to test consume secrets
May 11 20:55:47.420: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63" in namespace "projected-6234" to be "success or failure"
May 11 20:55:47.620: INFO: Pod "pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63": Phase="Pending", Reason="", readiness=false. Elapsed: 199.963338ms
May 11 20:55:49.624: INFO: Pod "pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203575865s
May 11 20:55:51.652: INFO: Pod "pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231916003s
May 11 20:55:53.754: INFO: Pod "pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63": Phase="Running", Reason="", readiness=true. Elapsed: 6.333557721s
May 11 20:55:55.757: INFO: Pod "pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.336559232s
STEP: Saw pod success
May 11 20:55:55.757: INFO: Pod "pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63" satisfied condition "success or failure"
May 11 20:55:55.765: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63 container secret-volume-test:
STEP: delete the pod
May 11 20:55:55.801: INFO: Waiting for pod pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63 to disappear
May 11 20:55:55.964: INFO: Pod pod-projected-secrets-2b240fe9-8972-424d-8022-789ccab82d63 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:55:55.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6234" for this suite.
• [SLOW TEST:9.112 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":577,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:55:55.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:56:07.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2239" for this suite.
• [SLOW TEST:11.970 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":34,"skipped":629,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:56:07.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585
[It] should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 11 20:56:08.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2610'
May 11 20:56:15.616: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 11 20:56:15.616: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
May 11 20:56:15.674: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
May 11 20:56:15.768: INFO: scanned /root for discovery docs:
May 11 20:56:15.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2610'
May 11 20:56:34.650: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 11 20:56:34.650: INFO: stdout: "Created e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61\nScaling up e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
May 11 20:56:34.650: INFO: stdout: "Created e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61\nScaling up e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
May 11 20:56:34.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2610'
May 11 20:56:34.950: INFO: stderr: ""
May 11 20:56:34.950: INFO: stdout: "e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61-hm7k2 "
May 11 20:56:34.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61-hm7k2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2610'
May 11 20:56:35.358: INFO: stderr: ""
May 11 20:56:35.358: INFO: stdout: "true"
May 11 20:56:35.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61-hm7k2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2610'
May 11 20:56:35.541: INFO: stderr: ""
May 11 20:56:35.541: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
May 11 20:56:35.541: INFO: e2e-test-httpd-rc-82e82d07f73c43f82d24e9a85f515e61-hm7k2 is verified up and running
[AfterEach] Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
May 11 20:56:35.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2610'
May 11 20:56:35.666: INFO: stderr: ""
May 11 20:56:35.666: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:56:35.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2610" for this suite.
• [SLOW TEST:27.735 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl rolling-update
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580
should support rolling-update to same image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":35,"skipped":635,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 20:56:35.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-5a8f83f6-5823-43aa-899c-36dbc3bf493b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 20:56:42.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6127" for this suite.
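Both commands the rolling-update test shells out to are flagged as deprecated in its own log (`kubectl run --generator=run/v1` and `kubectl rolling-update`, both since removed from kubectl). On current clusters the same flow is a Deployment plus `kubectl rollout`. A sketch of the equivalent commands, with `kubectl` stubbed to print rather than execute since no cluster is assumed here, and with the deployment/container names (`e2e-test-httpd`, `httpd`) chosen only for illustration:

```shell
#!/bin/sh
# Stub: echo each invocation instead of talking to a cluster.
kubectl() { echo "kubectl $*"; }

kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
# Re-applying the same image is a no-op for a Deployment; use
# `kubectl rollout restart` if you need new pods anyway.
kubectl set image deployment/e2e-test-httpd httpd=docker.io/library/httpd:2.4.38-alpine
kubectl rollout status deployment/e2e-test-httpd
```

Unlike `rolling-update`, which scaled two ReplicationControllers against each other client-side, the Deployment controller performs the rollout server-side.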
• [SLOW TEST:7.102 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":653,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:56:42.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:56:50.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8835" for this suite. 
• [SLOW TEST:7.941 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":655,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:56:50.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-1a5bd68b-c70b-455b-8cea-9e1ce745a6b1 STEP: Creating a pod to test consume configMaps May 11 20:56:51.830: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64" in namespace "projected-1312" to be "success or failure" May 11 20:56:51.863: INFO: Pod "pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64": Phase="Pending", Reason="", 
readiness=false. Elapsed: 33.36778ms May 11 20:56:53.966: INFO: Pod "pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135857102s May 11 20:56:56.058: INFO: Pod "pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.228244073s May 11 20:56:58.607: INFO: Pod "pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.776744714s STEP: Saw pod success May 11 20:56:58.607: INFO: Pod "pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64" satisfied condition "success or failure" May 11 20:56:58.610: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64 container projected-configmap-volume-test: STEP: delete the pod May 11 20:56:59.015: INFO: Waiting for pod pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64 to disappear May 11 20:56:59.112: INFO: Pod pod-projected-configmaps-eeb965fd-ab10-4b18-8d80-d4922fa03d64 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:56:59.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1312" for this suite. 
• [SLOW TEST:8.691 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":38,"skipped":703,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:56:59.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:57:11.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3721" for this suite. 
• [SLOW TEST:11.598 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":39,"skipped":719,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:57:11.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 20:57:11.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2" in namespace "downward-api-2468" to be "success or failure" May 11 20:57:11.326: INFO: Pod "downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 130.249477ms May 11 20:57:13.396: INFO: Pod "downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200422882s May 11 20:57:15.401: INFO: Pod "downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205114218s May 11 20:57:17.403: INFO: Pod "downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207144649s STEP: Saw pod success May 11 20:57:17.403: INFO: Pod "downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2" satisfied condition "success or failure" May 11 20:57:17.405: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2 container client-container: STEP: delete the pod May 11 20:57:17.419: INFO: Waiting for pod downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2 to disappear May 11 20:57:17.424: INFO: Pod downwardapi-volume-6a28b4b2-a763-4aa3-98cd-14d72c2036d2 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:57:17.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2468" for this suite. 
• [SLOW TEST:6.417 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":729,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:57:17.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 20:57:17.520: INFO: Waiting up to 5m0s for pod "pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9" in namespace "emptydir-2279" to be "success or failure" May 11 20:57:17.534: INFO: Pod "pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.760111ms May 11 20:57:19.622: INFO: Pod "pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.102847396s May 11 20:57:21.647: INFO: Pod "pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9": Phase="Running", Reason="", readiness=true. Elapsed: 4.127464241s May 11 20:57:23.650: INFO: Pod "pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130458716s STEP: Saw pod success May 11 20:57:23.650: INFO: Pod "pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9" satisfied condition "success or failure" May 11 20:57:23.652: INFO: Trying to get logs from node jerma-worker pod pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9 container test-container: STEP: delete the pod May 11 20:57:23.703: INFO: Waiting for pod pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9 to disappear May 11 20:57:23.711: INFO: Pod pod-9194ceb8-3ab0-4504-b36a-3c570b9dfbc9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:57:23.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2279" for this suite. 
• [SLOW TEST:6.287 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":792,"failed":0} SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:57:23.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-cd5ef9d2-05fa-4d10-8ed7-9ef610074221 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:57:23.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3922" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":42,"skipped":794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:57:23.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 20:57:23.881: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374" in namespace "projected-7000" to be "success or failure" May 11 20:57:23.893: INFO: Pod "downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374": Phase="Pending", Reason="", readiness=false. Elapsed: 11.815408ms May 11 20:57:25.917: INFO: Pod "downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036318655s May 11 20:57:27.922: INFO: Pod "downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374": Phase="Running", Reason="", readiness=true. Elapsed: 4.041058029s May 11 20:57:29.926: INFO: Pod "downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.044831853s STEP: Saw pod success May 11 20:57:29.926: INFO: Pod "downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374" satisfied condition "success or failure" May 11 20:57:29.929: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374 container client-container: STEP: delete the pod May 11 20:57:29.954: INFO: Waiting for pod downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374 to disappear May 11 20:57:29.959: INFO: Pod downwardapi-volume-2dae0adc-7b0f-4a9f-815b-0c4783eb5374 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:57:29.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7000" for this suite. • [SLOW TEST:6.167 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":831,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:57:29.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account 
to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-0b8ed4bb-68f3-4cb8-8bdb-ab2c1a8692e4 STEP: Creating configMap with name cm-test-opt-upd-b49c4746-7391-46dc-8358-81379aa7ebda STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0b8ed4bb-68f3-4cb8-8bdb-ab2c1a8692e4 STEP: Updating configmap cm-test-opt-upd-b49c4746-7391-46dc-8358-81379aa7ebda STEP: Creating configMap with name cm-test-opt-create-f40b1589-7b51-4e4c-bafe-eefa083ab897 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:58:59.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6164" for this suite. • [SLOW TEST:89.832 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":836,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:58:59.798: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3836 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3836 I0511 20:59:00.054501 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3836, replica count: 2 I0511 20:59:03.104873 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 20:59:06.105106 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 20:59:06.105: INFO: Creating new exec pod May 11 20:59:11.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3836 execpod24f2x -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 11 20:59:11.985: INFO: stderr: "I0511 20:59:11.830978 198 log.go:172] (0xc000948630) (0xc00064ff40) Create stream\nI0511 20:59:11.831061 198 log.go:172] (0xc000948630) (0xc00064ff40) Stream added, broadcasting: 1\nI0511 20:59:11.833915 198 log.go:172] (0xc000948630) Reply frame received for 1\nI0511 20:59:11.833949 198 log.go:172] (0xc000948630) (0xc000624820) Create stream\nI0511 20:59:11.833958 198 log.go:172] (0xc000948630) (0xc000624820) Stream added, broadcasting: 3\nI0511 20:59:11.834790 198 log.go:172] (0xc000948630) 
Reply frame received for 3\nI0511 20:59:11.834823 198 log.go:172] (0xc000948630) (0xc0003535e0) Create stream\nI0511 20:59:11.834835 198 log.go:172] (0xc000948630) (0xc0003535e0) Stream added, broadcasting: 5\nI0511 20:59:11.835796 198 log.go:172] (0xc000948630) Reply frame received for 5\nI0511 20:59:11.973847 198 log.go:172] (0xc000948630) Data frame received for 5\nI0511 20:59:11.973867 198 log.go:172] (0xc0003535e0) (5) Data frame handling\nI0511 20:59:11.973874 198 log.go:172] (0xc0003535e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0511 20:59:11.976800 198 log.go:172] (0xc000948630) Data frame received for 5\nI0511 20:59:11.976837 198 log.go:172] (0xc0003535e0) (5) Data frame handling\nI0511 20:59:11.976871 198 log.go:172] (0xc0003535e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0511 20:59:11.977733 198 log.go:172] (0xc000948630) Data frame received for 3\nI0511 20:59:11.977756 198 log.go:172] (0xc000624820) (3) Data frame handling\nI0511 20:59:11.978235 198 log.go:172] (0xc000948630) Data frame received for 5\nI0511 20:59:11.978266 198 log.go:172] (0xc0003535e0) (5) Data frame handling\nI0511 20:59:11.979836 198 log.go:172] (0xc000948630) Data frame received for 1\nI0511 20:59:11.979864 198 log.go:172] (0xc00064ff40) (1) Data frame handling\nI0511 20:59:11.979881 198 log.go:172] (0xc00064ff40) (1) Data frame sent\nI0511 20:59:11.979903 198 log.go:172] (0xc000948630) (0xc00064ff40) Stream removed, broadcasting: 1\nI0511 20:59:11.980269 198 log.go:172] (0xc000948630) (0xc00064ff40) Stream removed, broadcasting: 1\nI0511 20:59:11.980330 198 log.go:172] (0xc000948630) (0xc000624820) Stream removed, broadcasting: 3\nI0511 20:59:11.980349 198 log.go:172] (0xc000948630) (0xc0003535e0) Stream removed, broadcasting: 5\n" May 11 20:59:11.985: INFO: stdout: "" May 11 20:59:11.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3836 execpod24f2x -- 
/bin/sh -x -c nc -zv -t -w 2 10.98.137.149 80' May 11 20:59:12.167: INFO: stderr: "I0511 20:59:12.102532 219 log.go:172] (0xc000bea000) (0xc000b74000) Create stream\nI0511 20:59:12.102588 219 log.go:172] (0xc000bea000) (0xc000b74000) Stream added, broadcasting: 1\nI0511 20:59:12.104751 219 log.go:172] (0xc000bea000) Reply frame received for 1\nI0511 20:59:12.104793 219 log.go:172] (0xc000bea000) (0xc000b740a0) Create stream\nI0511 20:59:12.104805 219 log.go:172] (0xc000bea000) (0xc000b740a0) Stream added, broadcasting: 3\nI0511 20:59:12.105825 219 log.go:172] (0xc000bea000) Reply frame received for 3\nI0511 20:59:12.105848 219 log.go:172] (0xc000bea000) (0xc000b74140) Create stream\nI0511 20:59:12.105856 219 log.go:172] (0xc000bea000) (0xc000b74140) Stream added, broadcasting: 5\nI0511 20:59:12.106500 219 log.go:172] (0xc000bea000) Reply frame received for 5\nI0511 20:59:12.160431 219 log.go:172] (0xc000bea000) Data frame received for 3\nI0511 20:59:12.160450 219 log.go:172] (0xc000b740a0) (3) Data frame handling\nI0511 20:59:12.160462 219 log.go:172] (0xc000bea000) Data frame received for 5\nI0511 20:59:12.160475 219 log.go:172] (0xc000b74140) (5) Data frame handling\nI0511 20:59:12.160491 219 log.go:172] (0xc000b74140) (5) Data frame sent\nI0511 20:59:12.160498 219 log.go:172] (0xc000bea000) Data frame received for 5\nI0511 20:59:12.160503 219 log.go:172] (0xc000b74140) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.137.149 80\nConnection to 10.98.137.149 80 port [tcp/http] succeeded!\nI0511 20:59:12.161863 219 log.go:172] (0xc000bea000) Data frame received for 1\nI0511 20:59:12.161902 219 log.go:172] (0xc000b74000) (1) Data frame handling\nI0511 20:59:12.161923 219 log.go:172] (0xc000b74000) (1) Data frame sent\nI0511 20:59:12.161949 219 log.go:172] (0xc000bea000) (0xc000b74000) Stream removed, broadcasting: 1\nI0511 20:59:12.162035 219 log.go:172] (0xc000bea000) Go away received\nI0511 20:59:12.162459 219 log.go:172] (0xc000bea000) (0xc000b74000) Stream 
removed, broadcasting: 1\nI0511 20:59:12.162478 219 log.go:172] (0xc000bea000) (0xc000b740a0) Stream removed, broadcasting: 3\nI0511 20:59:12.162490 219 log.go:172] (0xc000bea000) (0xc000b74140) Stream removed, broadcasting: 5\n" May 11 20:59:12.167: INFO: stdout: "" May 11 20:59:12.167: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:59:12.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3836" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.437 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":45,"skipped":860,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:59:12.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active 
pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-8077, will wait for the garbage collector to delete the pods May 11 20:59:18.452: INFO: Deleting Job.batch foo took: 6.602252ms May 11 20:59:18.652: INFO: Terminating Job.batch foo pods took: 200.226608ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 20:59:59.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8077" for this suite. • [SLOW TEST:47.375 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":46,"skipped":874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 20:59:59.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
pod-subpath-test-configmap-g28q STEP: Creating a pod to test atomic-volume-subpath May 11 20:59:59.887: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-g28q" in namespace "subpath-108" to be "success or failure" May 11 20:59:59.902: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Pending", Reason="", readiness=false. Elapsed: 15.448768ms May 11 21:00:02.083: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196063716s May 11 21:00:04.154: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 4.267649419s May 11 21:00:06.351: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 6.464613328s May 11 21:00:08.446: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 8.559788702s May 11 21:00:10.451: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 10.564187912s May 11 21:00:12.454: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 12.566989061s May 11 21:00:14.506: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 14.61975145s May 11 21:00:16.566: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 16.679522585s May 11 21:00:18.620: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 18.733214835s May 11 21:00:20.623: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 20.736169942s May 11 21:00:22.627: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Running", Reason="", readiness=true. Elapsed: 22.74072673s May 11 21:00:24.631: INFO: Pod "pod-subpath-test-configmap-g28q": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.744552484s STEP: Saw pod success May 11 21:00:24.631: INFO: Pod "pod-subpath-test-configmap-g28q" satisfied condition "success or failure" May 11 21:00:24.634: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-g28q container test-container-subpath-configmap-g28q: STEP: delete the pod May 11 21:00:24.664: INFO: Waiting for pod pod-subpath-test-configmap-g28q to disappear May 11 21:00:24.691: INFO: Pod pod-subpath-test-configmap-g28q no longer exists STEP: Deleting pod pod-subpath-test-configmap-g28q May 11 21:00:24.692: INFO: Deleting pod "pod-subpath-test-configmap-g28q" in namespace "subpath-108" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:00:24.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-108" for this suite. • [SLOW TEST:25.123 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":47,"skipped":905,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 
21:00:24.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 21:00:24.890: INFO: Waiting up to 5m0s for pod "downward-api-037143c1-1848-43e2-951b-c0e18583b4d1" in namespace "downward-api-2620" to be "success or failure" May 11 21:00:24.915: INFO: Pod "downward-api-037143c1-1848-43e2-951b-c0e18583b4d1": Phase="Pending", Reason="", readiness=false. Elapsed: 25.142756ms May 11 21:00:27.153: INFO: Pod "downward-api-037143c1-1848-43e2-951b-c0e18583b4d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263417622s May 11 21:00:29.597: INFO: Pod "downward-api-037143c1-1848-43e2-951b-c0e18583b4d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.706801209s STEP: Saw pod success May 11 21:00:29.597: INFO: Pod "downward-api-037143c1-1848-43e2-951b-c0e18583b4d1" satisfied condition "success or failure" May 11 21:00:29.601: INFO: Trying to get logs from node jerma-worker2 pod downward-api-037143c1-1848-43e2-951b-c0e18583b4d1 container dapi-container: STEP: delete the pod May 11 21:00:30.043: INFO: Waiting for pod downward-api-037143c1-1848-43e2-951b-c0e18583b4d1 to disappear May 11 21:00:30.083: INFO: Pod downward-api-037143c1-1848-43e2-951b-c0e18583b4d1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:00:30.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2620" for this suite. 
• [SLOW TEST:5.405 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:00:30.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:00:30.477: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 11 21:00:30.593: INFO: Pod name sample-pod: Found 0 pods out of 1 May 11 21:00:35.632: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 21:00:37.639: INFO: Creating deployment "test-rolling-update-deployment" May 11 21:00:37.642: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set 
"test-rolling-update-controller" has May 11 21:00:37.676: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 11 21:00:39.835: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 11 21:00:40.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:00:42.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:00:44.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827637, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:00:46.285: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 21:00:46.295: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-416 /apis/apps/v1/namespaces/deployment-416/deployments/test-rolling-update-deployment 99a76478-50cd-4c34-a27e-08d2095ad4dd 15350107 1 2020-05-11 21:00:37 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 
+0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000999988 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 21:00:37 +0000 UTC,LastTransitionTime:2020-05-11 21:00:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-11 21:00:44 +0000 UTC,LastTransitionTime:2020-05-11 21:00:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 21:00:46.299: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-416 /apis/apps/v1/namespaces/deployment-416/replicasets/test-rolling-update-deployment-67cf4f6444 01fae0e1-fa3d-4b03-9da7-838f64fc945a 15350096 1 2020-05-11 21:00:37 +0000 UTC 
map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 99a76478-50cd-4c34-a27e-08d2095ad4dd 0xc001c32167 0xc001c32168}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001c321d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 21:00:46.299: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 11 21:00:46.299: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-416 /apis/apps/v1/namespaces/deployment-416/replicasets/test-rolling-update-controller 9631628f-8403-4ef1-b3db-f93594013aaf 15350105 2 2020-05-11 21:00:30 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment 
test-rolling-update-deployment 99a76478-50cd-4c34-a27e-08d2095ad4dd 0xc001c32087 0xc001c32088}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001c320f8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 21:00:46.302: INFO: Pod "test-rolling-update-deployment-67cf4f6444-qxp4b" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-qxp4b test-rolling-update-deployment-67cf4f6444- deployment-416 /api/v1/namespaces/deployment-416/pods/test-rolling-update-deployment-67cf4f6444-qxp4b bba4b12b-8d88-49d5-923d-526e537c1d03 15350095 0 2020-05-11 21:00:37 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 01fae0e1-fa3d-4b03-9da7-838f64fc945a 0xc001afd7d7 0xc001afd7d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s7td6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s7td6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s7td6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:00:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:00:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:00:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:00:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.129,StartTime:2020-05-11 21:00:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 21:00:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://60af52a156fab4c60e032e6202448273f1f224c86f2ea69d397fd95f3fde3912,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.129,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:00:46.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-416" for this suite. • [SLOW TEST:16.171 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":49,"skipped":931,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:00:46.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service 
account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-049ec54b-15d9-41ce-b348-e3a4afdce5a2 STEP: Creating a pod to test consume configMaps May 11 21:00:46.487: INFO: Waiting up to 5m0s for pod "pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d" in namespace "configmap-8349" to be "success or failure" May 11 21:00:46.539: INFO: Pod "pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d": Phase="Pending", Reason="", readiness=false. Elapsed: 51.986077ms May 11 21:00:48.542: INFO: Pod "pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055229371s May 11 21:00:50.546: INFO: Pod "pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058774663s May 11 21:00:53.016: INFO: Pod "pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.529119624s STEP: Saw pod success May 11 21:00:53.016: INFO: Pod "pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d" satisfied condition "success or failure" May 11 21:00:53.372: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d container configmap-volume-test: STEP: delete the pod May 11 21:00:53.910: INFO: Waiting for pod pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d to disappear May 11 21:00:54.129: INFO: Pod pod-configmaps-32b2a571-e7a0-45b4-b28d-526f9428be5d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:00:54.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8349" for this suite. 
• [SLOW TEST:7.911 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":965,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:00:54.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:00:56.752: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 21:00:58.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827657, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:01:01.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827657, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:01:03.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827657, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827656, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:01:05.976: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:01:06.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1323" for this suite. STEP: Destroying namespace "webhook-1323-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.402 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":51,"skipped":972,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:01:06.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:01:14.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7708" for this suite. 
• [SLOW TEST:7.804 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":52,"skipped":978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:01:14.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:01:17.147: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 21:01:19.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827677, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827677, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827677, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827676, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:01:21.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827677, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827677, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827677, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827676, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:01:24.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:01:25.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5582" for this suite. STEP: Destroying namespace "webhook-5582-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.511 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":53,"skipped":1010,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:01:26.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:01:27.462: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:01:34.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-265" for this suite. • [SLOW TEST:7.415 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":1020,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:01:34.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-643 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 21:01:34.504: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 21:02:02.844: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.132:8080/dial?request=hostname&protocol=http&host=10.244.1.131&port=8080&tries=1'] Namespace:pod-network-test-643 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:02:02.844: INFO: >>> kubeConfig: /root/.kube/config I0511 21:02:02.880461 6 log.go:172] (0xc001373e40) (0xc00241fb80) Create stream I0511 21:02:02.880491 6 log.go:172] (0xc001373e40) (0xc00241fb80) Stream added, broadcasting: 1 I0511 21:02:02.882923 6 log.go:172] (0xc001373e40) Reply frame received for 1 I0511 21:02:02.882995 6 log.go:172] (0xc001373e40) (0xc002480000) Create stream I0511 21:02:02.883015 6 log.go:172] (0xc001373e40) (0xc002480000) Stream added, broadcasting: 3 I0511 21:02:02.883801 6 log.go:172] (0xc001373e40) Reply frame received for 3 I0511 21:02:02.883837 6 log.go:172] (0xc001373e40) (0xc00241fc20) Create stream I0511 21:02:02.883851 6 log.go:172] (0xc001373e40) (0xc00241fc20) Stream added, broadcasting: 5 I0511 21:02:02.884734 6 log.go:172] (0xc001373e40) Reply frame received for 5 I0511 21:02:02.957895 6 log.go:172] (0xc001373e40) Data frame received for 3 I0511 21:02:02.957921 6 log.go:172] (0xc002480000) (3) Data frame handling I0511 21:02:02.957945 6 log.go:172] (0xc002480000) (3) Data frame sent I0511 21:02:02.958518 6 log.go:172] (0xc001373e40) Data frame received for 3 I0511 21:02:02.958537 6 log.go:172] (0xc002480000) (3) Data frame handling I0511 21:02:02.958553 6 log.go:172] (0xc001373e40) Data frame received for 5 I0511 21:02:02.958559 6 
log.go:172] (0xc00241fc20) (5) Data frame handling I0511 21:02:02.960298 6 log.go:172] (0xc001373e40) Data frame received for 1 I0511 21:02:02.960315 6 log.go:172] (0xc00241fb80) (1) Data frame handling I0511 21:02:02.960333 6 log.go:172] (0xc00241fb80) (1) Data frame sent I0511 21:02:02.960348 6 log.go:172] (0xc001373e40) (0xc00241fb80) Stream removed, broadcasting: 1 I0511 21:02:02.960426 6 log.go:172] (0xc001373e40) Go away received I0511 21:02:02.960455 6 log.go:172] (0xc001373e40) (0xc00241fb80) Stream removed, broadcasting: 1 I0511 21:02:02.960469 6 log.go:172] (0xc001373e40) (0xc002480000) Stream removed, broadcasting: 3 I0511 21:02:02.960486 6 log.go:172] (0xc001373e40) (0xc00241fc20) Stream removed, broadcasting: 5 May 11 21:02:02.960: INFO: Waiting for responses: map[] May 11 21:02:02.991: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.132:8080/dial?request=hostname&protocol=http&host=10.244.2.85&port=8080&tries=1'] Namespace:pod-network-test-643 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:02:02.991: INFO: >>> kubeConfig: /root/.kube/config I0511 21:02:03.024864 6 log.go:172] (0xc000f8c580) (0xc002468000) Create stream I0511 21:02:03.024895 6 log.go:172] (0xc000f8c580) (0xc002468000) Stream added, broadcasting: 1 I0511 21:02:03.027629 6 log.go:172] (0xc000f8c580) Reply frame received for 1 I0511 21:02:03.027657 6 log.go:172] (0xc000f8c580) (0xc0024800a0) Create stream I0511 21:02:03.027665 6 log.go:172] (0xc000f8c580) (0xc0024800a0) Stream added, broadcasting: 3 I0511 21:02:03.028473 6 log.go:172] (0xc000f8c580) Reply frame received for 3 I0511 21:02:03.028504 6 log.go:172] (0xc000f8c580) (0xc002480140) Create stream I0511 21:02:03.028520 6 log.go:172] (0xc000f8c580) (0xc002480140) Stream added, broadcasting: 5 I0511 21:02:03.029598 6 log.go:172] (0xc000f8c580) Reply frame received for 5 I0511 21:02:03.099874 6 log.go:172] (0xc000f8c580) 
Data frame received for 3 I0511 21:02:03.099909 6 log.go:172] (0xc0024800a0) (3) Data frame handling I0511 21:02:03.099928 6 log.go:172] (0xc0024800a0) (3) Data frame sent I0511 21:02:03.100252 6 log.go:172] (0xc000f8c580) Data frame received for 3 I0511 21:02:03.100350 6 log.go:172] (0xc0024800a0) (3) Data frame handling I0511 21:02:03.100448 6 log.go:172] (0xc000f8c580) Data frame received for 5 I0511 21:02:03.100468 6 log.go:172] (0xc002480140) (5) Data frame handling I0511 21:02:03.102248 6 log.go:172] (0xc000f8c580) Data frame received for 1 I0511 21:02:03.102303 6 log.go:172] (0xc002468000) (1) Data frame handling I0511 21:02:03.102335 6 log.go:172] (0xc002468000) (1) Data frame sent I0511 21:02:03.102354 6 log.go:172] (0xc000f8c580) (0xc002468000) Stream removed, broadcasting: 1 I0511 21:02:03.102373 6 log.go:172] (0xc000f8c580) Go away received I0511 21:02:03.102491 6 log.go:172] (0xc000f8c580) (0xc002468000) Stream removed, broadcasting: 1 I0511 21:02:03.102560 6 log.go:172] (0xc000f8c580) (0xc0024800a0) Stream removed, broadcasting: 3 I0511 21:02:03.102585 6 log.go:172] (0xc000f8c580) (0xc002480140) Stream removed, broadcasting: 5 May 11 21:02:03.102: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:02:03.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-643" for this suite. 
• [SLOW TEST:28.840 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":1024,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:02:03.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-dde1629f-b705-4582-9cbf-9c248f471333 STEP: Creating a pod to test consume configMaps May 11 21:02:05.695: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db" in namespace "projected-9877" to be "success or failure" May 11 21:02:05.844: INFO: Pod "pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db": Phase="Pending", Reason="", 
readiness=false. Elapsed: 149.693828ms May 11 21:02:08.179: INFO: Pod "pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484162545s May 11 21:02:10.233: INFO: Pod "pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.538531942s May 11 21:02:12.264: INFO: Pod "pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.569257012s May 11 21:02:14.267: INFO: Pod "pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.572735981s STEP: Saw pod success May 11 21:02:14.267: INFO: Pod "pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db" satisfied condition "success or failure" May 11 21:02:14.272: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db container projected-configmap-volume-test: STEP: delete the pod May 11 21:02:14.542: INFO: Waiting for pod pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db to disappear May 11 21:02:14.730: INFO: Pod pod-projected-configmaps-3bb4c9ff-3009-46d9-a5e1-63ebb2a465db no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:02:14.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9877" for this suite. 
• [SLOW TEST:11.543 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":1029,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:02:14.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-4m56 STEP: Creating a pod to test atomic-volume-subpath May 11 21:02:15.496: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4m56" in namespace "subpath-4727" to be "success or failure" May 11 21:02:15.634: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Pending", Reason="", readiness=false. 
Elapsed: 138.253847ms May 11 21:02:17.637: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141191918s May 11 21:02:19.670: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 4.173935982s May 11 21:02:22.119: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 6.623493495s May 11 21:02:24.123: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 8.627350532s May 11 21:02:26.127: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 10.631231958s May 11 21:02:28.130: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 12.633888046s May 11 21:02:30.134: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 14.637980964s May 11 21:02:32.238: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 16.741967118s May 11 21:02:34.241: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 18.74492619s May 11 21:02:36.260: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 20.764123853s May 11 21:02:38.299: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Running", Reason="", readiness=true. Elapsed: 22.802626231s May 11 21:02:40.303: INFO: Pod "pod-subpath-test-secret-4m56": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.806919401s STEP: Saw pod success May 11 21:02:40.303: INFO: Pod "pod-subpath-test-secret-4m56" satisfied condition "success or failure" May 11 21:02:40.306: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-4m56 container test-container-subpath-secret-4m56: STEP: delete the pod May 11 21:02:40.686: INFO: Waiting for pod pod-subpath-test-secret-4m56 to disappear May 11 21:02:40.752: INFO: Pod pod-subpath-test-secret-4m56 no longer exists STEP: Deleting pod pod-subpath-test-secret-4m56 May 11 21:02:40.752: INFO: Deleting pod "pod-subpath-test-secret-4m56" in namespace "subpath-4727" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:02:40.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4727" for this suite. • [SLOW TEST:26.123 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":57,"skipped":1048,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 
21:02:40.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 11 21:02:41.048: INFO: Waiting up to 5m0s for pod "client-containers-4ed161bb-104d-493c-8774-f99773a19c06" in namespace "containers-7829" to be "success or failure" May 11 21:02:41.057: INFO: Pod "client-containers-4ed161bb-104d-493c-8774-f99773a19c06": Phase="Pending", Reason="", readiness=false. Elapsed: 8.479458ms May 11 21:02:43.431: INFO: Pod "client-containers-4ed161bb-104d-493c-8774-f99773a19c06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382256338s May 11 21:02:45.435: INFO: Pod "client-containers-4ed161bb-104d-493c-8774-f99773a19c06": Phase="Running", Reason="", readiness=true. Elapsed: 4.38677247s May 11 21:02:47.439: INFO: Pod "client-containers-4ed161bb-104d-493c-8774-f99773a19c06": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.390742638s STEP: Saw pod success May 11 21:02:47.439: INFO: Pod "client-containers-4ed161bb-104d-493c-8774-f99773a19c06" satisfied condition "success or failure" May 11 21:02:47.442: INFO: Trying to get logs from node jerma-worker2 pod client-containers-4ed161bb-104d-493c-8774-f99773a19c06 container test-container: STEP: delete the pod May 11 21:02:47.502: INFO: Waiting for pod client-containers-4ed161bb-104d-493c-8774-f99773a19c06 to disappear May 11 21:02:47.550: INFO: Pod client-containers-4ed161bb-104d-493c-8774-f99773a19c06 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:02:47.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7829" for this suite. • [SLOW TEST:6.710 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1058,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:02:47.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:02:51.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6913" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1069,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:02:51.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:02:52.389: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 
21:02:54.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827772, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827772, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827772, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724827772, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:02:57.798: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:02:58.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5216" for this suite. STEP: Destroying namespace "webhook-5216-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.773 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":60,"skipped":1092,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:02:58.677: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 
[It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:02:58.982: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11" in namespace "projected-2615" to be "success or failure" May 11 21:02:59.162: INFO: Pod "downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11": Phase="Pending", Reason="", readiness=false. Elapsed: 180.597806ms May 11 21:03:01.167: INFO: Pod "downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.185451036s May 11 21:03:03.452: INFO: Pod "downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11": Phase="Pending", Reason="", readiness=false. Elapsed: 4.469905972s May 11 21:03:05.479: INFO: Pod "downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11": Phase="Pending", Reason="", readiness=false. Elapsed: 6.496975276s May 11 21:03:07.561: INFO: Pod "downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.579436872s STEP: Saw pod success May 11 21:03:07.561: INFO: Pod "downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11" satisfied condition "success or failure" May 11 21:03:07.820: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11 container client-container: STEP: delete the pod May 11 21:03:07.844: INFO: Waiting for pod downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11 to disappear May 11 21:03:08.171: INFO: Pod downwardapi-volume-f240b640-1dd3-45e9-aa4e-e16093471c11 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:03:08.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2615" for this suite. • [SLOW TEST:9.502 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1177,"failed":0} [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:03:08.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use 
the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:03:15.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4240" for this suite. • [SLOW TEST:7.219 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1177,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:03:15.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test 
downward API volume plugin May 11 21:03:16.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44" in namespace "downward-api-7127" to be "success or failure" May 11 21:03:16.289: INFO: Pod "downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44": Phase="Pending", Reason="", readiness=false. Elapsed: 168.882591ms May 11 21:03:18.292: INFO: Pod "downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172443844s May 11 21:03:20.298: INFO: Pod "downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177772021s May 11 21:03:22.588: INFO: Pod "downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.467803502s May 11 21:03:24.711: INFO: Pod "downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.591230644s STEP: Saw pod success May 11 21:03:24.711: INFO: Pod "downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44" satisfied condition "success or failure" May 11 21:03:24.713: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44 container client-container: STEP: delete the pod May 11 21:03:24.904: INFO: Waiting for pod downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44 to disappear May 11 21:03:25.104: INFO: Pod downwardapi-volume-20d25193-363f-4160-b252-cae6a3308a44 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:03:25.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7127" for this suite. 
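The "downward API volume plugin" pod above exposes the container's memory limit as a file via a `downwardAPI` volume with a `resourceFieldRef`. A minimal sketch of such a manifest as a plain Python dict (field paths follow the Kubernetes v1 API; the image, command, and limit values are illustrative, not the exact ones the framework builds):

```python
# Sketch of a downward API volume pod, similar in shape to the
# "downwardapi-volume-..." pods in the log. The container name
# "client-container" matches the log; other values are assumed.
def downward_api_volume_pod(name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                "command": ["/agnhost", "mounttest",
                            "--file_content=/etc/podinfo/memory_limit"],
                "resources": {"limits": {"memory": "64Mi"}},
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                "downwardAPI": {
                    "items": [{
                        "path": "memory_limit",
                        # Exposes the container's memory limit (in bytes)
                        # as the file /etc/podinfo/memory_limit.
                        "resourceFieldRef": {
                            "containerName": "client-container",
                            "resource": "limits.memory",
                        },
                    }],
                },
            }],
        },
    }
```

The test then reads that file from the container's logs and checks it matches the declared limit.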
• [SLOW TEST:9.713 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1182,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:03:25.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:03:26.181: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-6b5ffeea-fa8a-4529-a6b4-358ccc41be26" in namespace "security-context-test-478" to be "success or failure" May 11 21:03:26.455: INFO: Pod "busybox-privileged-false-6b5ffeea-fa8a-4529-a6b4-358ccc41be26": Phase="Pending", Reason="", readiness=false. 
Elapsed: 273.725753ms May 11 21:03:28.484: INFO: Pod "busybox-privileged-false-6b5ffeea-fa8a-4529-a6b4-358ccc41be26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302791057s May 11 21:03:30.634: INFO: Pod "busybox-privileged-false-6b5ffeea-fa8a-4529-a6b4-358ccc41be26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.453229604s May 11 21:03:33.108: INFO: Pod "busybox-privileged-false-6b5ffeea-fa8a-4529-a6b4-358ccc41be26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.926681455s May 11 21:03:35.111: INFO: Pod "busybox-privileged-false-6b5ffeea-fa8a-4529-a6b4-358ccc41be26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.929855784s May 11 21:03:35.111: INFO: Pod "busybox-privileged-false-6b5ffeea-fa8a-4529-a6b4-358ccc41be26" satisfied condition "success or failure" May 11 21:03:35.125: INFO: Got logs for pod "busybox-privileged-false-6b5ffeea-fa8a-4529-a6b4-358ccc41be26": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:03:35.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-478" for this suite. 
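The "RTNETLINK answers: Operation not permitted" log line is the point of the test above: with `privileged: false`, the container may not modify network interfaces, so `ip` fails. A sketch of that pod shape (the `|| true` is assumed from the pod still reaching `Succeeded` despite the error):

```python
# Sketch of the "busybox-privileged-false-..." pod. With
# privileged: false the container cannot create network devices,
# so `ip link add` prints "RTNETLINK answers: Operation not
# permitted" -- the exact string the test asserts on.
def unprivileged_pod(name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": name,
                "image": "busybox",
                # "|| true" lets the pod end Succeeded even though
                # the ip command itself is denied.
                "command": ["sh", "-c",
                            "ip link add dummy0 type dummy || true"],
                "securityContext": {"privileged": False},
            }],
        },
    }
```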
• [SLOW TEST:10.021 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:03:35.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-311cb618-2a35-48d6-bcb0-83ad0ba1f27b in namespace container-probe-5291 May 11 21:03:41.944: INFO: Started pod busybox-311cb618-2a35-48d6-bcb0-83ad0ba1f27b in namespace container-probe-5291 STEP: checking the pod's current state 
and verifying that restartCount is present May 11 21:03:41.947: INFO: Initial restart count of pod busybox-311cb618-2a35-48d6-bcb0-83ad0ba1f27b is 0 May 11 21:04:30.886: INFO: Restart count of pod container-probe-5291/busybox-311cb618-2a35-48d6-bcb0-83ad0ba1f27b is now 1 (48.938692208s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:04:30.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5291" for this suite. • [SLOW TEST:56.068 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1214,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:04:31.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:04:31.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d" in namespace "projected-2111" to be "success or failure" May 11 21:04:31.834: INFO: Pod "downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d": Phase="Pending", Reason="", readiness=false. Elapsed: 172.638615ms May 11 21:04:33.919: INFO: Pod "downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.25780509s May 11 21:04:35.922: INFO: Pod "downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261070484s May 11 21:04:37.925: INFO: Pod "downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264047026s STEP: Saw pod success May 11 21:04:37.925: INFO: Pod "downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d" satisfied condition "success or failure" May 11 21:04:37.928: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d container client-container: STEP: delete the pod May 11 21:04:38.008: INFO: Waiting for pod downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d to disappear May 11 21:04:38.199: INFO: Pod downwardapi-volume-a0ac60f4-7d3d-4585-a8bd-9f9c3a79800d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:04:38.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2111" for this suite. 
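The "podname only" case uses a `projected` volume whose single source is a downward API item bound to `metadata.name`. A sketch of that volume layout (field paths follow the v1 API; values are illustrative):

```python
# Sketch of a projected downward API volume exposing only the pod
# name, as in the "should provide podname only" test above.
def projected_podname_pod(name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "client-container",
                "image": "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                "volumeMounts": [{"name": "podinfo",
                                  "mountPath": "/etc/podinfo"}],
            }],
            "volumes": [{
                "name": "podinfo",
                # A projected volume can merge several sources
                # (secrets, configMaps, downwardAPI) into one mount;
                # here there is a single downward API source.
                "projected": {
                    "sources": [{
                        "downwardAPI": {
                            "items": [{
                                "path": "podname",
                                "fieldRef": {"fieldPath": "metadata.name"},
                            }],
                        },
                    }],
                },
            }],
        },
    }
```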
• [SLOW TEST:7.040 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1229,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:04:38.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 21:04:39.010: INFO: Waiting up to 5m0s for pod "downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e" in namespace "downward-api-2567" to be "success or failure" May 11 21:04:39.187: INFO: Pod "downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e": Phase="Pending", Reason="", readiness=false. Elapsed: 176.288539ms May 11 21:04:41.189: INFO: Pod "downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.178801554s May 11 21:04:43.445: INFO: Pod "downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e": Phase="Running", Reason="", readiness=true. Elapsed: 4.434849792s May 11 21:04:45.468: INFO: Pod "downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.457731193s STEP: Saw pod success May 11 21:04:45.468: INFO: Pod "downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e" satisfied condition "success or failure" May 11 21:04:45.470: INFO: Trying to get logs from node jerma-worker2 pod downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e container dapi-container: STEP: delete the pod May 11 21:04:45.667: INFO: Waiting for pod downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e to disappear May 11 21:04:46.115: INFO: Pod downward-api-c4d234fb-b5ae-4cc7-adb0-69f0833f136e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:04:46.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2567" for this suite. 
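Unlike the volume-based tests, this one surfaces limits and requests as environment variables through `valueFrom.resourceFieldRef`. A sketch (container name `dapi-container` matches the log; env var names and resource values are assumed):

```python
# Sketch of a pod exposing limits.cpu/memory and
# requests.cpu/memory as env vars, as in the test above.
def downward_api_env_pod(name: str) -> dict:
    def res_env(var: str, resource: str) -> dict:
        # Each env var maps to one container resource field.
        return {
            "name": var,
            "valueFrom": {"resourceFieldRef": {
                "containerName": "dapi-container",
                "resource": resource,
            }},
        }
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "dapi-container",
                "image": "busybox",
                "command": ["sh", "-c", "env"],
                "resources": {
                    "requests": {"cpu": "250m", "memory": "32Mi"},
                    "limits": {"cpu": "1250m", "memory": "64Mi"},
                },
                "env": [
                    res_env("CPU_LIMIT", "limits.cpu"),
                    res_env("MEMORY_LIMIT", "limits.memory"),
                    res_env("CPU_REQUEST", "requests.cpu"),
                    res_env("MEMORY_REQUEST", "requests.memory"),
                ],
            }],
        },
    }
```

The test reads the container's `env` output and checks each variable against the declared resources.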
• [SLOW TEST:7.913 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:04:46.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:04:46.414: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 11 21:04:51.457: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 21:04:53.478: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 21:04:53.547: INFO: Deployment 
"test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9155 /apis/apps/v1/namespaces/deployment-9155/deployments/test-cleanup-deployment 74f5803e-c9bd-4665-b555-b7328723dcd9 15351447 1 2020-05-11 21:04:53 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00196eb78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 11 21:04:53.568: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-9155 /apis/apps/v1/namespaces/deployment-9155/replicasets/test-cleanup-deployment-55ffc6b7b6 883d5f92-3644-4523-816a-d1dabfaf87c9 15351449 1 2020-05-11 21:04:53 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 74f5803e-c9bd-4665-b555-b7328723dcd9 0xc0009983a7 0xc0009983a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000998418 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 21:04:53.568: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 11 21:04:53.568: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9155 /apis/apps/v1/namespaces/deployment-9155/replicasets/test-cleanup-controller 6d54d2ee-5f0c-4f2b-8c7e-444fa51ef0cb 15351448 1 2020-05-11 21:04:46 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 74f5803e-c9bd-4665-b555-b7328723dcd9 0xc0009982a7 0xc0009982a8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000998308 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 21:04:53.609: INFO: Pod "test-cleanup-controller-fg5q2" is available: &Pod{ObjectMeta:{test-cleanup-controller-fg5q2 test-cleanup-controller- deployment-9155 /api/v1/namespaces/deployment-9155/pods/test-cleanup-controller-fg5q2 953af6a2-17a4-44db-a2aa-4f7d25bdb05c 15351442 0 2020-05-11 21:04:46 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 6d54d2ee-5f0c-4f2b-8c7e-444fa51ef0cb 0xc000998b47 0xc000998b48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stzdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stzdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stzdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:04:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:04:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:04:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.92,StartTime:2020-05-11 21:04:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 21:04:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ed67f7eccd9662141b8754159a88aeef774f7567fc35088a16422ce16f42d603,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.92,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 21:04:53.610: INFO: Pod 
"test-cleanup-deployment-55ffc6b7b6-c5zvg" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-c5zvg test-cleanup-deployment-55ffc6b7b6- deployment-9155 /api/v1/namespaces/deployment-9155/pods/test-cleanup-deployment-55ffc6b7b6-c5zvg d0c36b42-7c14-4fc6-aca9-afecee4a1fe1 15351454 0 2020-05-11 21:04:53 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 883d5f92-3644-4523-816a-d1dabfaf87c9 0xc000998ec7 0xc000998ec8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-stzdk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-stzdk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-stzdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevic
e{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:04:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:04:53.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9155" for this suite. 
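The Deployment dump above shows `RevisionHistoryLimit:*0`; that setting is what makes the controller delete superseded ReplicaSets as soon as they are scaled down, which is what this test waits for. A sketch of such a Deployment (labels and image mirror the dump; replica count is illustrative):

```python
# Sketch of a Deployment whose old ReplicaSets are garbage-collected
# immediately, as in "deployment should delete old replica sets".
def cleanup_deployment(name: str) -> dict:
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": 1,
            # 0 retained revisions: once a ReplicaSet is no longer
            # current, the deployment controller deletes it instead
            # of keeping it around for rollback.
            "revisionHistoryLimit": 0,
            "selector": {"matchLabels": {"name": "cleanup-pod"}},
            "template": {
                "metadata": {"labels": {"name": "cleanup-pod"}},
                "spec": {"containers": [{
                    "name": "agnhost",
                    "image": "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
                }]},
            },
        },
    }
```

With the default `revisionHistoryLimit` of 10, the old `test-cleanup-controller` ReplicaSet seen in the dump would instead have been retained at 0 replicas.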
• [SLOW TEST:7.509 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":68,"skipped":1265,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:04:53.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3825 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3825 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3825 May 11 21:04:53.762: INFO: Found 0 stateful pods, 
waiting for 1 May 11 21:05:03.765: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 11 21:05:03.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3825 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 21:05:04.380: INFO: stderr: "I0511 21:05:03.883572 239 log.go:172] (0xc000a42dc0) (0xc000609d60) Create stream\nI0511 21:05:03.883617 239 log.go:172] (0xc000a42dc0) (0xc000609d60) Stream added, broadcasting: 1\nI0511 21:05:03.885592 239 log.go:172] (0xc000a42dc0) Reply frame received for 1\nI0511 21:05:03.885630 239 log.go:172] (0xc000a42dc0) (0xc000828000) Create stream\nI0511 21:05:03.885644 239 log.go:172] (0xc000a42dc0) (0xc000828000) Stream added, broadcasting: 3\nI0511 21:05:03.886378 239 log.go:172] (0xc000a42dc0) Reply frame received for 3\nI0511 21:05:03.886406 239 log.go:172] (0xc000a42dc0) (0xc0008280a0) Create stream\nI0511 21:05:03.886415 239 log.go:172] (0xc000a42dc0) (0xc0008280a0) Stream added, broadcasting: 5\nI0511 21:05:03.887084 239 log.go:172] (0xc000a42dc0) Reply frame received for 5\nI0511 21:05:03.957048 239 log.go:172] (0xc000a42dc0) Data frame received for 5\nI0511 21:05:03.957070 239 log.go:172] (0xc0008280a0) (5) Data frame handling\nI0511 21:05:03.957091 239 log.go:172] (0xc0008280a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:05:04.371491 239 log.go:172] (0xc000a42dc0) Data frame received for 3\nI0511 21:05:04.371550 239 log.go:172] (0xc000828000) (3) Data frame handling\nI0511 21:05:04.371583 239 log.go:172] (0xc000828000) (3) Data frame sent\nI0511 21:05:04.371608 239 log.go:172] (0xc000a42dc0) Data frame received for 3\nI0511 21:05:04.371626 239 log.go:172] (0xc000828000) (3) Data frame handling\nI0511 21:05:04.371935 239 log.go:172] (0xc000a42dc0) Data frame received for 5\nI0511 
21:05:04.371969 239 log.go:172] (0xc0008280a0) (5) Data frame handling\nI0511 21:05:04.374539 239 log.go:172] (0xc000a42dc0) Data frame received for 1\nI0511 21:05:04.374583 239 log.go:172] (0xc000609d60) (1) Data frame handling\nI0511 21:05:04.374621 239 log.go:172] (0xc000609d60) (1) Data frame sent\nI0511 21:05:04.374653 239 log.go:172] (0xc000a42dc0) (0xc000609d60) Stream removed, broadcasting: 1\nI0511 21:05:04.374750 239 log.go:172] (0xc000a42dc0) Go away received\nI0511 21:05:04.375887 239 log.go:172] (0xc000a42dc0) (0xc000609d60) Stream removed, broadcasting: 1\nI0511 21:05:04.375935 239 log.go:172] (0xc000a42dc0) (0xc000828000) Stream removed, broadcasting: 3\nI0511 21:05:04.375959 239 log.go:172] (0xc000a42dc0) (0xc0008280a0) Stream removed, broadcasting: 5\n" May 11 21:05:04.381: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 21:05:04.381: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 21:05:04.384: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 21:05:14.421: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 21:05:14.421: INFO: Waiting for statefulset status.replicas updated to 0 May 11 21:05:15.757: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998936s May 11 21:05:16.847: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992719101s May 11 21:05:17.940: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.903424719s May 11 21:05:18.955: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.810219043s May 11 21:05:20.722: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.795376719s May 11 21:05:21.901: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.028469683s May 11 21:05:22.905: INFO: Verifying statefulset ss doesn't scale 
past 1 for another 2.849151911s May 11 21:05:23.931: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.844972415s May 11 21:05:24.935: INFO: Verifying statefulset ss doesn't scale past 1 for another 819.27ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3825 May 11 21:05:25.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3825 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:05:26.560: INFO: stderr: "I0511 21:05:26.465884 257 log.go:172] (0xc000114bb0) (0xc00069fae0) Create stream\nI0511 21:05:26.465945 257 log.go:172] (0xc000114bb0) (0xc00069fae0) Stream added, broadcasting: 1\nI0511 21:05:26.468304 257 log.go:172] (0xc000114bb0) Reply frame received for 1\nI0511 21:05:26.468357 257 log.go:172] (0xc000114bb0) (0xc000ad4000) Create stream\nI0511 21:05:26.468371 257 log.go:172] (0xc000114bb0) (0xc000ad4000) Stream added, broadcasting: 3\nI0511 21:05:26.469633 257 log.go:172] (0xc000114bb0) Reply frame received for 3\nI0511 21:05:26.469693 257 log.go:172] (0xc000114bb0) (0xc00024a000) Create stream\nI0511 21:05:26.469710 257 log.go:172] (0xc000114bb0) (0xc00024a000) Stream added, broadcasting: 5\nI0511 21:05:26.470883 257 log.go:172] (0xc000114bb0) Reply frame received for 5\nI0511 21:05:26.553551 257 log.go:172] (0xc000114bb0) Data frame received for 3\nI0511 21:05:26.553595 257 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0511 21:05:26.553632 257 log.go:172] (0xc000ad4000) (3) Data frame sent\nI0511 21:05:26.553893 257 log.go:172] (0xc000114bb0) Data frame received for 5\nI0511 21:05:26.553918 257 log.go:172] (0xc000114bb0) Data frame received for 3\nI0511 21:05:26.553945 257 log.go:172] (0xc000ad4000) (3) Data frame handling\nI0511 21:05:26.553966 257 log.go:172] (0xc00024a000) (5) Data frame handling\nI0511 21:05:26.553980 257 log.go:172] (0xc00024a000) (5) Data frame 
sent\nI0511 21:05:26.553989 257 log.go:172] (0xc000114bb0) Data frame received for 5\nI0511 21:05:26.553998 257 log.go:172] (0xc00024a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:05:26.555594 257 log.go:172] (0xc000114bb0) Data frame received for 1\nI0511 21:05:26.555623 257 log.go:172] (0xc00069fae0) (1) Data frame handling\nI0511 21:05:26.555636 257 log.go:172] (0xc00069fae0) (1) Data frame sent\nI0511 21:05:26.555650 257 log.go:172] (0xc000114bb0) (0xc00069fae0) Stream removed, broadcasting: 1\nI0511 21:05:26.555678 257 log.go:172] (0xc000114bb0) Go away received\nI0511 21:05:26.555996 257 log.go:172] (0xc000114bb0) (0xc00069fae0) Stream removed, broadcasting: 1\nI0511 21:05:26.556019 257 log.go:172] (0xc000114bb0) (0xc000ad4000) Stream removed, broadcasting: 3\nI0511 21:05:26.556029 257 log.go:172] (0xc000114bb0) (0xc00024a000) Stream removed, broadcasting: 5\n" May 11 21:05:26.561: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 21:05:26.561: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 21:05:26.565: INFO: Found 1 stateful pods, waiting for 3 May 11 21:05:36.721: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 21:05:36.721: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 21:05:36.721: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 21:05:46.570: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 21:05:46.570: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 11 21:05:46.570: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt 
with unhealthy stateful pod May 11 21:05:46.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3825 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 21:05:46.802: INFO: stderr: "I0511 21:05:46.711531 277 log.go:172] (0xc0009bc2c0) (0xc000711e00) Create stream\nI0511 21:05:46.711608 277 log.go:172] (0xc0009bc2c0) (0xc000711e00) Stream added, broadcasting: 1\nI0511 21:05:46.714862 277 log.go:172] (0xc0009bc2c0) Reply frame received for 1\nI0511 21:05:46.714900 277 log.go:172] (0xc0009bc2c0) (0xc0008f6a00) Create stream\nI0511 21:05:46.714913 277 log.go:172] (0xc0009bc2c0) (0xc0008f6a00) Stream added, broadcasting: 3\nI0511 21:05:46.715825 277 log.go:172] (0xc0009bc2c0) Reply frame received for 3\nI0511 21:05:46.715854 277 log.go:172] (0xc0009bc2c0) (0xc0008f6aa0) Create stream\nI0511 21:05:46.715868 277 log.go:172] (0xc0009bc2c0) (0xc0008f6aa0) Stream added, broadcasting: 5\nI0511 21:05:46.716707 277 log.go:172] (0xc0009bc2c0) Reply frame received for 5\nI0511 21:05:46.797104 277 log.go:172] (0xc0009bc2c0) Data frame received for 5\nI0511 21:05:46.797312 277 log.go:172] (0xc0008f6aa0) (5) Data frame handling\nI0511 21:05:46.797328 277 log.go:172] (0xc0008f6aa0) (5) Data frame sent\nI0511 21:05:46.797340 277 log.go:172] (0xc0009bc2c0) Data frame received for 5\nI0511 21:05:46.797357 277 log.go:172] (0xc0008f6aa0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:05:46.797395 277 log.go:172] (0xc0009bc2c0) Data frame received for 3\nI0511 21:05:46.797438 277 log.go:172] (0xc0008f6a00) (3) Data frame handling\nI0511 21:05:46.797479 277 log.go:172] (0xc0008f6a00) (3) Data frame sent\nI0511 21:05:46.797501 277 log.go:172] (0xc0009bc2c0) Data frame received for 3\nI0511 21:05:46.797522 277 log.go:172] (0xc0008f6a00) (3) Data frame handling\nI0511 21:05:46.798623 277 log.go:172] (0xc0009bc2c0) Data frame received for 1\nI0511 21:05:46.798642 277 
log.go:172] (0xc000711e00) (1) Data frame handling\nI0511 21:05:46.798656 277 log.go:172] (0xc000711e00) (1) Data frame sent\nI0511 21:05:46.798667 277 log.go:172] (0xc0009bc2c0) (0xc000711e00) Stream removed, broadcasting: 1\nI0511 21:05:46.798812 277 log.go:172] (0xc0009bc2c0) Go away received\nI0511 21:05:46.798954 277 log.go:172] (0xc0009bc2c0) (0xc000711e00) Stream removed, broadcasting: 1\nI0511 21:05:46.798970 277 log.go:172] (0xc0009bc2c0) (0xc0008f6a00) Stream removed, broadcasting: 3\nI0511 21:05:46.798978 277 log.go:172] (0xc0009bc2c0) (0xc0008f6aa0) Stream removed, broadcasting: 5\n" May 11 21:05:46.802: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 21:05:46.802: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 21:05:46.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3825 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 21:05:47.483: INFO: stderr: "I0511 21:05:47.079567 296 log.go:172] (0xc000104580) (0xc0003974a0) Create stream\nI0511 21:05:47.079627 296 log.go:172] (0xc000104580) (0xc0003974a0) Stream added, broadcasting: 1\nI0511 21:05:47.082286 296 log.go:172] (0xc000104580) Reply frame received for 1\nI0511 21:05:47.082336 296 log.go:172] (0xc000104580) (0xc000397540) Create stream\nI0511 21:05:47.082351 296 log.go:172] (0xc000104580) (0xc000397540) Stream added, broadcasting: 3\nI0511 21:05:47.083274 296 log.go:172] (0xc000104580) Reply frame received for 3\nI0511 21:05:47.083300 296 log.go:172] (0xc000104580) (0xc000679ae0) Create stream\nI0511 21:05:47.083308 296 log.go:172] (0xc000104580) (0xc000679ae0) Stream added, broadcasting: 5\nI0511 21:05:47.084171 296 log.go:172] (0xc000104580) Reply frame received for 5\nI0511 21:05:47.143389 296 log.go:172] (0xc000104580) Data frame received for 5\nI0511 21:05:47.143437 
296 log.go:172] (0xc000679ae0) (5) Data frame handling\nI0511 21:05:47.143467 296 log.go:172] (0xc000679ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:05:47.477892 296 log.go:172] (0xc000104580) Data frame received for 3\nI0511 21:05:47.477914 296 log.go:172] (0xc000397540) (3) Data frame handling\nI0511 21:05:47.477927 296 log.go:172] (0xc000397540) (3) Data frame sent\nI0511 21:05:47.477933 296 log.go:172] (0xc000104580) Data frame received for 3\nI0511 21:05:47.477943 296 log.go:172] (0xc000397540) (3) Data frame handling\nI0511 21:05:47.477984 296 log.go:172] (0xc000104580) Data frame received for 5\nI0511 21:05:47.477993 296 log.go:172] (0xc000679ae0) (5) Data frame handling\nI0511 21:05:47.479385 296 log.go:172] (0xc000104580) Data frame received for 1\nI0511 21:05:47.479397 296 log.go:172] (0xc0003974a0) (1) Data frame handling\nI0511 21:05:47.479409 296 log.go:172] (0xc0003974a0) (1) Data frame sent\nI0511 21:05:47.479520 296 log.go:172] (0xc000104580) (0xc0003974a0) Stream removed, broadcasting: 1\nI0511 21:05:47.479550 296 log.go:172] (0xc000104580) Go away received\nI0511 21:05:47.479785 296 log.go:172] (0xc000104580) (0xc0003974a0) Stream removed, broadcasting: 1\nI0511 21:05:47.479801 296 log.go:172] (0xc000104580) (0xc000397540) Stream removed, broadcasting: 3\nI0511 21:05:47.479811 296 log.go:172] (0xc000104580) (0xc000679ae0) Stream removed, broadcasting: 5\n" May 11 21:05:47.483: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 21:05:47.483: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 21:05:47.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3825 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 21:05:48.469: INFO: stderr: "I0511 21:05:47.792566 316 log.go:172] (0xc00092cb00) 
(0xc0005d7ea0) Create stream\nI0511 21:05:47.792634 316 log.go:172] (0xc00092cb00) (0xc0005d7ea0) Stream added, broadcasting: 1\nI0511 21:05:47.794867 316 log.go:172] (0xc00092cb00) Reply frame received for 1\nI0511 21:05:47.794913 316 log.go:172] (0xc00092cb00) (0xc000792000) Create stream\nI0511 21:05:47.794932 316 log.go:172] (0xc00092cb00) (0xc000792000) Stream added, broadcasting: 3\nI0511 21:05:47.795530 316 log.go:172] (0xc00092cb00) Reply frame received for 3\nI0511 21:05:47.795556 316 log.go:172] (0xc00092cb00) (0xc0005d7f40) Create stream\nI0511 21:05:47.795571 316 log.go:172] (0xc00092cb00) (0xc0005d7f40) Stream added, broadcasting: 5\nI0511 21:05:47.796204 316 log.go:172] (0xc00092cb00) Reply frame received for 5\nI0511 21:05:47.860470 316 log.go:172] (0xc00092cb00) Data frame received for 5\nI0511 21:05:47.860486 316 log.go:172] (0xc0005d7f40) (5) Data frame handling\nI0511 21:05:47.860499 316 log.go:172] (0xc0005d7f40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:05:48.463204 316 log.go:172] (0xc00092cb00) Data frame received for 3\nI0511 21:05:48.463232 316 log.go:172] (0xc000792000) (3) Data frame handling\nI0511 21:05:48.463244 316 log.go:172] (0xc000792000) (3) Data frame sent\nI0511 21:05:48.463343 316 log.go:172] (0xc00092cb00) Data frame received for 3\nI0511 21:05:48.463405 316 log.go:172] (0xc000792000) (3) Data frame handling\nI0511 21:05:48.463573 316 log.go:172] (0xc00092cb00) Data frame received for 5\nI0511 21:05:48.463584 316 log.go:172] (0xc0005d7f40) (5) Data frame handling\nI0511 21:05:48.465018 316 log.go:172] (0xc00092cb00) Data frame received for 1\nI0511 21:05:48.465062 316 log.go:172] (0xc0005d7ea0) (1) Data frame handling\nI0511 21:05:48.465100 316 log.go:172] (0xc0005d7ea0) (1) Data frame sent\nI0511 21:05:48.465231 316 log.go:172] (0xc00092cb00) (0xc0005d7ea0) Stream removed, broadcasting: 1\nI0511 21:05:48.465245 316 log.go:172] (0xc00092cb00) Go away received\nI0511 21:05:48.465694 316 
log.go:172] (0xc00092cb00) (0xc0005d7ea0) Stream removed, broadcasting: 1\nI0511 21:05:48.465742 316 log.go:172] (0xc00092cb00) (0xc000792000) Stream removed, broadcasting: 3\nI0511 21:05:48.465766 316 log.go:172] (0xc00092cb00) (0xc0005d7f40) Stream removed, broadcasting: 5\n" May 11 21:05:48.469: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 21:05:48.469: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 21:05:48.469: INFO: Waiting for statefulset status.replicas updated to 0 May 11 21:05:48.579: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 11 21:05:58.652: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 21:05:58.652: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 21:05:58.652: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 21:05:58.895: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999545s May 11 21:05:59.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.793831119s May 11 21:06:00.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.790750378s May 11 21:06:01.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.786061108s May 11 21:06:02.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.763222201s May 11 21:06:04.483: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.758390103s May 11 21:06:05.871: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.206139984s May 11 21:06:06.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.817688442s May 11 21:06:07.886: INFO: Verifying statefulset ss doesn't scale past 3 for another 806.486011ms STEP: Scaling down stateful set ss to 0 replicas and waiting 
until none of the pods will run in namespace statefulset-3825 May 11 21:06:08.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3825 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:06:09.227: INFO: stderr: "I0511 21:06:09.123105 332 log.go:172] (0xc0009cc0b0) (0xc0005335e0) Create stream\nI0511 21:06:09.123157 332 log.go:172] (0xc0009cc0b0) (0xc0005335e0) Stream added, broadcasting: 1\nI0511 21:06:09.126776 332 log.go:172] (0xc0009cc0b0) Reply frame received for 1\nI0511 21:06:09.126801 332 log.go:172] (0xc0009cc0b0) (0xc00090c000) Create stream\nI0511 21:06:09.126808 332 log.go:172] (0xc0009cc0b0) (0xc00090c000) Stream added, broadcasting: 3\nI0511 21:06:09.127759 332 log.go:172] (0xc0009cc0b0) Reply frame received for 3\nI0511 21:06:09.127800 332 log.go:172] (0xc0009cc0b0) (0xc000a94000) Create stream\nI0511 21:06:09.127816 332 log.go:172] (0xc0009cc0b0) (0xc000a94000) Stream added, broadcasting: 5\nI0511 21:06:09.128661 332 log.go:172] (0xc0009cc0b0) Reply frame received for 5\nI0511 21:06:09.222228 332 log.go:172] (0xc0009cc0b0) Data frame received for 5\nI0511 21:06:09.222246 332 log.go:172] (0xc000a94000) (5) Data frame handling\nI0511 21:06:09.222255 332 log.go:172] (0xc000a94000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:06:09.222272 332 log.go:172] (0xc0009cc0b0) Data frame received for 3\nI0511 21:06:09.222280 332 log.go:172] (0xc00090c000) (3) Data frame handling\nI0511 21:06:09.222287 332 log.go:172] (0xc00090c000) (3) Data frame sent\nI0511 21:06:09.222481 332 log.go:172] (0xc0009cc0b0) Data frame received for 5\nI0511 21:06:09.222491 332 log.go:172] (0xc000a94000) (5) Data frame handling\nI0511 21:06:09.222506 332 log.go:172] (0xc0009cc0b0) Data frame received for 3\nI0511 21:06:09.222512 332 log.go:172] (0xc00090c000) (3) Data frame handling\nI0511 21:06:09.223741 332 log.go:172] (0xc0009cc0b0) Data frame received for 
1\nI0511 21:06:09.223769 332 log.go:172] (0xc0005335e0) (1) Data frame handling\nI0511 21:06:09.223787 332 log.go:172] (0xc0005335e0) (1) Data frame sent\nI0511 21:06:09.223807 332 log.go:172] (0xc0009cc0b0) (0xc0005335e0) Stream removed, broadcasting: 1\nI0511 21:06:09.223830 332 log.go:172] (0xc0009cc0b0) Go away received\nI0511 21:06:09.224138 332 log.go:172] (0xc0009cc0b0) (0xc0005335e0) Stream removed, broadcasting: 1\nI0511 21:06:09.224152 332 log.go:172] (0xc0009cc0b0) (0xc00090c000) Stream removed, broadcasting: 3\nI0511 21:06:09.224159 332 log.go:172] (0xc0009cc0b0) (0xc000a94000) Stream removed, broadcasting: 5\n" May 11 21:06:09.227: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 21:06:09.227: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 21:06:09.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3825 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:06:09.410: INFO: stderr: "I0511 21:06:09.342712 352 log.go:172] (0xc000a880b0) (0xc000b08280) Create stream\nI0511 21:06:09.342761 352 log.go:172] (0xc000a880b0) (0xc000b08280) Stream added, broadcasting: 1\nI0511 21:06:09.345621 352 log.go:172] (0xc000a880b0) Reply frame received for 1\nI0511 21:06:09.345660 352 log.go:172] (0xc000a880b0) (0xc000b08320) Create stream\nI0511 21:06:09.345679 352 log.go:172] (0xc000a880b0) (0xc000b08320) Stream added, broadcasting: 3\nI0511 21:06:09.346613 352 log.go:172] (0xc000a880b0) Reply frame received for 3\nI0511 21:06:09.346646 352 log.go:172] (0xc000a880b0) (0xc000b083c0) Create stream\nI0511 21:06:09.346658 352 log.go:172] (0xc000a880b0) (0xc000b083c0) Stream added, broadcasting: 5\nI0511 21:06:09.347613 352 log.go:172] (0xc000a880b0) Reply frame received for 5\nI0511 21:06:09.402510 352 log.go:172] (0xc000a880b0) Data frame received 
for 3\nI0511 21:06:09.402540 352 log.go:172] (0xc000b08320) (3) Data frame handling\nI0511 21:06:09.402552 352 log.go:172] (0xc000b08320) (3) Data frame sent\nI0511 21:06:09.402568 352 log.go:172] (0xc000a880b0) Data frame received for 3\nI0511 21:06:09.402589 352 log.go:172] (0xc000b08320) (3) Data frame handling\nI0511 21:06:09.402751 352 log.go:172] (0xc000a880b0) Data frame received for 5\nI0511 21:06:09.402762 352 log.go:172] (0xc000b083c0) (5) Data frame handling\nI0511 21:06:09.402770 352 log.go:172] (0xc000b083c0) (5) Data frame sent\nI0511 21:06:09.402777 352 log.go:172] (0xc000a880b0) Data frame received for 5\nI0511 21:06:09.402782 352 log.go:172] (0xc000b083c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:06:09.403844 352 log.go:172] (0xc000a880b0) Data frame received for 1\nI0511 21:06:09.403875 352 log.go:172] (0xc000b08280) (1) Data frame handling\nI0511 21:06:09.403895 352 log.go:172] (0xc000b08280) (1) Data frame sent\nI0511 21:06:09.403925 352 log.go:172] (0xc000a880b0) (0xc000b08280) Stream removed, broadcasting: 1\nI0511 21:06:09.404493 352 log.go:172] (0xc000a880b0) Go away received\nI0511 21:06:09.405910 352 log.go:172] (0xc000a880b0) (0xc000b08280) Stream removed, broadcasting: 1\nI0511 21:06:09.405931 352 log.go:172] (0xc000a880b0) (0xc000b08320) Stream removed, broadcasting: 3\nI0511 21:06:09.405943 352 log.go:172] (0xc000a880b0) (0xc000b083c0) Stream removed, broadcasting: 5\n" May 11 21:06:09.410: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 21:06:09.410: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 21:06:09.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3825 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:06:09.659: INFO: stderr: "I0511 21:06:09.574333 372 
log.go:172] (0xc0009bc000) (0xc00096a000) Create stream\nI0511 21:06:09.574388 372 log.go:172] (0xc0009bc000) (0xc00096a000) Stream added, broadcasting: 1\nI0511 21:06:09.577797 372 log.go:172] (0xc0009bc000) Reply frame received for 1\nI0511 21:06:09.577866 372 log.go:172] (0xc0009bc000) (0xc0009ae000) Create stream\nI0511 21:06:09.577887 372 log.go:172] (0xc0009bc000) (0xc0009ae000) Stream added, broadcasting: 3\nI0511 21:06:09.579875 372 log.go:172] (0xc0009bc000) Reply frame received for 3\nI0511 21:06:09.579939 372 log.go:172] (0xc0009bc000) (0xc0008f8000) Create stream\nI0511 21:06:09.579962 372 log.go:172] (0xc0009bc000) (0xc0008f8000) Stream added, broadcasting: 5\nI0511 21:06:09.581603 372 log.go:172] (0xc0009bc000) Reply frame received for 5\nI0511 21:06:09.648839 372 log.go:172] (0xc0009bc000) Data frame received for 5\nI0511 21:06:09.648870 372 log.go:172] (0xc0008f8000) (5) Data frame handling\nI0511 21:06:09.648885 372 log.go:172] (0xc0008f8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:06:09.652729 372 log.go:172] (0xc0009bc000) Data frame received for 3\nI0511 21:06:09.652746 372 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0511 21:06:09.652762 372 log.go:172] (0xc0009ae000) (3) Data frame sent\nI0511 21:06:09.652923 372 log.go:172] (0xc0009bc000) Data frame received for 3\nI0511 21:06:09.652937 372 log.go:172] (0xc0009ae000) (3) Data frame handling\nI0511 21:06:09.653422 372 log.go:172] (0xc0009bc000) Data frame received for 5\nI0511 21:06:09.653437 372 log.go:172] (0xc0008f8000) (5) Data frame handling\nI0511 21:06:09.655114 372 log.go:172] (0xc0009bc000) Data frame received for 1\nI0511 21:06:09.655130 372 log.go:172] (0xc00096a000) (1) Data frame handling\nI0511 21:06:09.655150 372 log.go:172] (0xc00096a000) (1) Data frame sent\nI0511 21:06:09.655236 372 log.go:172] (0xc0009bc000) (0xc00096a000) Stream removed, broadcasting: 1\nI0511 21:06:09.655462 372 log.go:172] (0xc0009bc000) Go away 
received\nI0511 21:06:09.655535 372 log.go:172] (0xc0009bc000) (0xc00096a000) Stream removed, broadcasting: 1\nI0511 21:06:09.655560 372 log.go:172] (0xc0009bc000) (0xc0009ae000) Stream removed, broadcasting: 3\nI0511 21:06:09.655570 372 log.go:172] (0xc0009bc000) (0xc0008f8000) Stream removed, broadcasting: 5\n" May 11 21:06:09.659: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 21:06:09.659: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 21:06:09.659: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 21:06:39.674: INFO: Deleting all statefulset in ns statefulset-3825 May 11 21:06:39.704: INFO: Scaling statefulset ss to 0 May 11 21:06:39.709: INFO: Waiting for statefulset status.replicas updated to 0 May 11 21:06:39.710: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:06:39.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3825" for this suite. 
• [SLOW TEST:106.178 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":69,"skipped":1275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:06:39.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 11 21:06:39.957: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7585 /api/v1/namespaces/watch-7585/configmaps/e2e-watch-test-label-changed c12cfd64-1a95-460d-ab3b-f087c8875250 15351947 0 2020-05-11 21:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 21:06:39.957: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7585 /api/v1/namespaces/watch-7585/configmaps/e2e-watch-test-label-changed c12cfd64-1a95-460d-ab3b-f087c8875250 15351948 0 2020-05-11 21:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 21:06:39.957: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7585 /api/v1/namespaces/watch-7585/configmaps/e2e-watch-test-label-changed c12cfd64-1a95-460d-ab3b-f087c8875250 15351949 0 2020-05-11 21:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 11 21:06:50.439: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7585 /api/v1/namespaces/watch-7585/configmaps/e2e-watch-test-label-changed c12cfd64-1a95-460d-ab3b-f087c8875250 15352008 0 2020-05-11 21:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 21:06:50.439: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7585 
/api/v1/namespaces/watch-7585/configmaps/e2e-watch-test-label-changed c12cfd64-1a95-460d-ab3b-f087c8875250 15352010 0 2020-05-11 21:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 11 21:06:50.439: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7585 /api/v1/namespaces/watch-7585/configmaps/e2e-watch-test-label-changed c12cfd64-1a95-460d-ab3b-f087c8875250 15352011 0 2020-05-11 21:06:39 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:06:50.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7585" for this suite. • [SLOW TEST:12.199 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":70,"skipped":1337,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:06:52.045: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 21:06:53.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3278' May 11 21:07:00.981: INFO: stderr: "" May 11 21:07:00.981: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 11 21:07:01.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3278' May 11 21:07:07.172: INFO: stderr: "" May 11 21:07:07.172: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:07:07.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3278" for this suite. 
• [SLOW TEST:15.673 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":71,"skipped":1341,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:07:07.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:07:08.583: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:07:14.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9036" for this suite. 
• [SLOW TEST:7.137 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1346,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:07:14.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:07:15.411: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"46dce73a-741f-4969-857f-5ef8b293742d", Controller:(*bool)(0xc002e8c9a2), BlockOwnerDeletion:(*bool)(0xc002e8c9a3)}} May 11 21:07:15.479: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a4fcbf99-b820-4a10-b6a6-7f4efe0b88b1", Controller:(*bool)(0xc00319e162), BlockOwnerDeletion:(*bool)(0xc00319e163)}} May 11 21:07:15.560: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", 
UID:"96489877-5060-4b2d-8583-45ed39ca2586", Controller:(*bool)(0xc002e8cb62), BlockOwnerDeletion:(*bool)(0xc002e8cb63)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:07:20.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6336" for this suite. • [SLOW TEST:5.835 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":73,"skipped":1354,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:07:20.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:07:25.257: INFO: deployment "sample-webhook-deployment" 
doesn't have the required revision set May 11 21:07:27.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:07:29.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 
21:07:34.209: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828045, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:07:36.861: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:07:36.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1400-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:07:39.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1802" for this suite. STEP: Destroying namespace "webhook-1802-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.095 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":74,"skipped":1367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:07:39.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 11 21:07:39.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-2184 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin 
closed'' May 11 21:07:45.406: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0511 21:07:45.294733 447 log.go:172] (0xc0001174a0) (0xc0004b41e0) Create stream\nI0511 21:07:45.294774 447 log.go:172] (0xc0001174a0) (0xc0004b41e0) Stream added, broadcasting: 1\nI0511 21:07:45.296841 447 log.go:172] (0xc0001174a0) Reply frame received for 1\nI0511 21:07:45.296876 447 log.go:172] (0xc0001174a0) (0xc0004b4280) Create stream\nI0511 21:07:45.296887 447 log.go:172] (0xc0001174a0) (0xc0004b4280) Stream added, broadcasting: 3\nI0511 21:07:45.297739 447 log.go:172] (0xc0001174a0) Reply frame received for 3\nI0511 21:07:45.297765 447 log.go:172] (0xc0001174a0) (0xc0009b80a0) Create stream\nI0511 21:07:45.297776 447 log.go:172] (0xc0001174a0) (0xc0009b80a0) Stream added, broadcasting: 5\nI0511 21:07:45.298533 447 log.go:172] (0xc0001174a0) Reply frame received for 5\nI0511 21:07:45.298554 447 log.go:172] (0xc0001174a0) (0xc0009b8140) Create stream\nI0511 21:07:45.298565 447 log.go:172] (0xc0001174a0) (0xc0009b8140) Stream added, broadcasting: 7\nI0511 21:07:45.299349 447 log.go:172] (0xc0001174a0) Reply frame received for 7\nI0511 21:07:45.299492 447 log.go:172] (0xc0004b4280) (3) Writing data frame\nI0511 21:07:45.299653 447 log.go:172] (0xc0004b4280) (3) Writing data frame\nI0511 21:07:45.300352 447 log.go:172] (0xc0001174a0) Data frame received for 5\nI0511 21:07:45.300369 447 log.go:172] (0xc0009b80a0) (5) Data frame handling\nI0511 21:07:45.300380 447 log.go:172] (0xc0009b80a0) (5) Data frame sent\nI0511 21:07:45.300954 447 log.go:172] (0xc0001174a0) Data frame received for 5\nI0511 21:07:45.300966 447 log.go:172] (0xc0009b80a0) (5) Data frame handling\nI0511 21:07:45.300984 447 log.go:172] (0xc0009b80a0) (5) Data frame sent\nI0511 21:07:45.347537 447 log.go:172] (0xc0001174a0) Data frame received 
for 7\nI0511 21:07:45.347571 447 log.go:172] (0xc0009b8140) (7) Data frame handling\nI0511 21:07:45.347593 447 log.go:172] (0xc0001174a0) Data frame received for 5\nI0511 21:07:45.347607 447 log.go:172] (0xc0009b80a0) (5) Data frame handling\nI0511 21:07:45.347747 447 log.go:172] (0xc0001174a0) Data frame received for 1\nI0511 21:07:45.347757 447 log.go:172] (0xc0004b41e0) (1) Data frame handling\nI0511 21:07:45.347762 447 log.go:172] (0xc0004b41e0) (1) Data frame sent\nI0511 21:07:45.347768 447 log.go:172] (0xc0001174a0) (0xc0004b41e0) Stream removed, broadcasting: 1\nI0511 21:07:45.347872 447 log.go:172] (0xc0001174a0) (0xc0004b4280) Stream removed, broadcasting: 3\nI0511 21:07:45.347932 447 log.go:172] (0xc0001174a0) Go away received\nI0511 21:07:45.347963 447 log.go:172] (0xc0001174a0) (0xc0004b41e0) Stream removed, broadcasting: 1\nI0511 21:07:45.347975 447 log.go:172] (0xc0001174a0) (0xc0004b4280) Stream removed, broadcasting: 3\nI0511 21:07:45.347984 447 log.go:172] (0xc0001174a0) (0xc0009b80a0) Stream removed, broadcasting: 5\nI0511 21:07:45.347993 447 log.go:172] (0xc0001174a0) (0xc0009b8140) Stream removed, broadcasting: 7\n" May 11 21:07:45.406: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:07:47.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2184" for this suite. 
• [SLOW TEST:8.002 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":75,"skipped":1391,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:07:47.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:08:00.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2009" for this suite. • [SLOW TEST:13.212 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":76,"skipped":1400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:08:01.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-84b41bef-99cd-487a-a481-3a8f0279cdd6 STEP: Creating a pod to test consume configMaps May 11 21:08:01.096: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea" in namespace "projected-4795" to be "success or failure" May 11 21:08:01.115: INFO: Pod "pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea": Phase="Pending", Reason="", readiness=false. Elapsed: 18.037639ms May 11 21:08:03.573: INFO: Pod "pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.47670071s May 11 21:08:05.603: INFO: Pod "pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea": Phase="Running", Reason="", readiness=true. Elapsed: 4.50606656s May 11 21:08:07.606: INFO: Pod "pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.509559861s STEP: Saw pod success May 11 21:08:07.606: INFO: Pod "pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea" satisfied condition "success or failure" May 11 21:08:07.609: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea container projected-configmap-volume-test: STEP: delete the pod May 11 21:08:07.631: INFO: Waiting for pod pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea to disappear May 11 21:08:07.635: INFO: Pod pod-projected-configmaps-fc352363-b310-40d0-b6da-f3ad2b5b43ea no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:08:07.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4795" for this suite. • [SLOW TEST:6.641 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1429,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:08:07.642: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:08:07.752: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f9558b48-6852-4984-b940-87123686607f" in namespace "projected-7642" to be "success or failure" May 11 21:08:07.761: INFO: Pod "downwardapi-volume-f9558b48-6852-4984-b940-87123686607f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.96108ms May 11 21:08:09.868: INFO: Pod "downwardapi-volume-f9558b48-6852-4984-b940-87123686607f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115783512s May 11 21:08:11.872: INFO: Pod "downwardapi-volume-f9558b48-6852-4984-b940-87123686607f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.120041498s STEP: Saw pod success May 11 21:08:11.872: INFO: Pod "downwardapi-volume-f9558b48-6852-4984-b940-87123686607f" satisfied condition "success or failure" May 11 21:08:11.876: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f9558b48-6852-4984-b940-87123686607f container client-container: STEP: delete the pod May 11 21:08:11.923: INFO: Waiting for pod downwardapi-volume-f9558b48-6852-4984-b940-87123686607f to disappear May 11 21:08:11.948: INFO: Pod downwardapi-volume-f9558b48-6852-4984-b940-87123686607f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:08:11.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7642" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1449,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:08:11.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-e93e4a45-0549-484a-af07-8309a5e999b5 STEP: Creating a pod to test consume secrets May 11 21:08:12.192: INFO: Waiting 
up to 5m0s for pod "pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233" in namespace "secrets-4486" to be "success or failure" May 11 21:08:12.334: INFO: Pod "pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233": Phase="Pending", Reason="", readiness=false. Elapsed: 142.707428ms May 11 21:08:14.478: INFO: Pod "pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233": Phase="Pending", Reason="", readiness=false. Elapsed: 2.286857083s May 11 21:08:16.483: INFO: Pod "pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291211944s May 11 21:08:18.880: INFO: Pod "pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.688448829s STEP: Saw pod success May 11 21:08:18.880: INFO: Pod "pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233" satisfied condition "success or failure" May 11 21:08:19.309: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233 container secret-volume-test: STEP: delete the pod May 11 21:08:19.585: INFO: Waiting for pod pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233 to disappear May 11 21:08:19.866: INFO: Pod pod-secrets-c05e304e-4dc6-4112-bb1a-f9773a539233 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:08:19.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4486" for this suite. 
• [SLOW TEST:7.917 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1460,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:08:19.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-c02f1f01-0710-4577-9915-a87ce835b6ef STEP: Creating a pod to test consume secrets May 11 21:08:23.328: INFO: Waiting up to 5m0s for pod "pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425" in namespace "secrets-8861" to be "success or failure" May 11 21:08:23.933: INFO: Pod "pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425": Phase="Pending", Reason="", readiness=false. 
Elapsed: 605.394635ms May 11 21:08:26.123: INFO: Pod "pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425": Phase="Pending", Reason="", readiness=false. Elapsed: 2.794852899s May 11 21:08:28.142: INFO: Pod "pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425": Phase="Pending", Reason="", readiness=false. Elapsed: 4.813870883s May 11 21:08:30.500: INFO: Pod "pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425": Phase="Pending", Reason="", readiness=false. Elapsed: 7.171524545s May 11 21:08:32.804: INFO: Pod "pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.476396292s STEP: Saw pod success May 11 21:08:32.804: INFO: Pod "pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425" satisfied condition "success or failure" May 11 21:08:32.807: INFO: Trying to get logs from node jerma-worker pod pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425 container secret-volume-test: STEP: delete the pod May 11 21:08:33.756: INFO: Waiting for pod pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425 to disappear May 11 21:08:34.162: INFO: Pod pod-secrets-59ffb11a-07ca-46f8-9388-235cb6775425 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:08:34.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8861" for this suite. STEP: Destroying namespace "secret-namespace-6429" for this suite. 
• [SLOW TEST:15.084 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1464,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:08:34.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override all
May 11 21:08:35.417: INFO: Waiting up to 5m0s for pod "client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29" in namespace "containers-3094" to be "success or failure"
May 11 21:08:35.446: INFO: Pod "client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29": Phase="Pending", Reason="", readiness=false. Elapsed: 29.47229ms
May 11 21:08:37.469: INFO: Pod "client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05225351s
May 11 21:08:39.872: INFO: Pod "client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.455511746s
May 11 21:08:41.880: INFO: Pod "client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.463311139s
STEP: Saw pod success
May 11 21:08:41.880: INFO: Pod "client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29" satisfied condition "success or failure"
May 11 21:08:41.883: INFO: Trying to get logs from node jerma-worker2 pod client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29 container test-container:
STEP: delete the pod
May 11 21:08:41.988: INFO: Waiting for pod client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29 to disappear
May 11 21:08:42.030: INFO: Pod client-containers-518bf5a6-e278-4a73-95c3-fed7a2940e29 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:08:42.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3094" for this suite.
• [SLOW TEST:7.079 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1529,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:08:42.037: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:08:42.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5758" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":82,"skipped":1552,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:08:42.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-b6dfda93-e22b-4773-be09-761492af6ceb
STEP: Creating secret with name s-test-opt-upd-4720ee43-16ff-4577-8493-b5cdaa88744e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-b6dfda93-e22b-4773-be09-761492af6ceb
STEP: Updating secret s-test-opt-upd-4720ee43-16ff-4577-8493-b5cdaa88744e
STEP: Creating secret with name s-test-opt-create-c45afcb2-494c-4857-8ed0-549308be53cf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:10:23.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5289" for this suite.
• [SLOW TEST:101.601 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1555,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:10:23.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May 11 21:10:23.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May 11 21:10:35.855: INFO: >>> kubeConfig: /root/.kube/config
May 11 21:10:38.823: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:10:49.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6908" for this suite.
• [SLOW TEST:25.479 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":84,"skipped":1562,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:10:49.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-nrs8
STEP: Creating a pod to test atomic-volume-subpath
May 11 21:10:49.720: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nrs8" in namespace "subpath-6216" to be "success or failure"
May 11 21:10:49.735: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.428286ms
May 11 21:10:51.767: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047088528s
May 11 21:10:53.771: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051324033s
May 11 21:10:55.775: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 6.054695888s
May 11 21:10:58.026: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 8.305924572s
May 11 21:11:00.030: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 10.31005533s
May 11 21:11:02.035: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 12.314678231s
May 11 21:11:04.039: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 14.318967665s
May 11 21:11:06.042: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 16.322118349s
May 11 21:11:08.074: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 18.354419926s
May 11 21:11:10.356: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 20.636266801s
May 11 21:11:12.360: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 22.639872776s
May 11 21:11:14.363: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 24.642660561s
May 11 21:11:16.562: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Running", Reason="", readiness=true. Elapsed: 26.842342822s
May 11 21:11:18.564: INFO: Pod "pod-subpath-test-configmap-nrs8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.844480316s
STEP: Saw pod success
May 11 21:11:18.564: INFO: Pod "pod-subpath-test-configmap-nrs8" satisfied condition "success or failure"
May 11 21:11:18.601: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-nrs8 container test-container-subpath-configmap-nrs8:
STEP: delete the pod
May 11 21:11:19.142: INFO: Waiting for pod pod-subpath-test-configmap-nrs8 to disappear
May 11 21:11:19.437: INFO: Pod pod-subpath-test-configmap-nrs8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-nrs8
May 11 21:11:19.437: INFO: Deleting pod "pod-subpath-test-configmap-nrs8" in namespace "subpath-6216"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:11:19.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6216" for this suite.
• [SLOW TEST:30.316 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":85,"skipped":1564,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:11:19.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting the proxy server
May 11 21:11:20.189: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:11:20.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7029" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":86,"skipped":1572,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:11:20.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-4ca32714-ee2a-43b7-870e-e1b993c2db70
STEP: Creating a pod to test consume secrets
May 11 21:11:20.942: INFO: Waiting up to 5m0s for pod "pod-secrets-88a306dd-df2c-45ba-a0cd-fe4b19b9e8dc" in namespace "secrets-3586" to be "success or failure"
May 11 21:11:20.961: INFO: Pod "pod-secrets-88a306dd-df2c-45ba-a0cd-fe4b19b9e8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.865817ms
May 11 21:11:22.965: INFO: Pod "pod-secrets-88a306dd-df2c-45ba-a0cd-fe4b19b9e8dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02276312s
May 11 21:11:24.982: INFO: Pod "pod-secrets-88a306dd-df2c-45ba-a0cd-fe4b19b9e8dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040239401s
STEP: Saw pod success
May 11 21:11:24.982: INFO: Pod "pod-secrets-88a306dd-df2c-45ba-a0cd-fe4b19b9e8dc" satisfied condition "success or failure"
May 11 21:11:25.021: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-88a306dd-df2c-45ba-a0cd-fe4b19b9e8dc container secret-env-test:
STEP: delete the pod
May 11 21:11:25.201: INFO: Waiting for pod pod-secrets-88a306dd-df2c-45ba-a0cd-fe4b19b9e8dc to disappear
May 11 21:11:25.215: INFO: Pod pod-secrets-88a306dd-df2c-45ba-a0cd-fe4b19b9e8dc no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:11:25.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3586" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1587,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:11:25.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-x78t
STEP: Creating a pod to test atomic-volume-subpath
May 11 21:11:25.324: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-x78t" in namespace "subpath-2511" to be "success or failure"
May 11 21:11:25.328: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.531055ms
May 11 21:11:27.332: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008335059s
May 11 21:11:29.337: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 4.012647182s
May 11 21:11:31.340: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 6.016221467s
May 11 21:11:33.344: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 8.020007531s
May 11 21:11:35.348: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 10.024441134s
May 11 21:11:37.352: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 12.028100856s
May 11 21:11:39.356: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 14.031800984s
May 11 21:11:41.360: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 16.036172068s
May 11 21:11:43.364: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 18.039827713s
May 11 21:11:45.367: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 20.043210847s
May 11 21:11:47.371: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 22.047310744s
May 11 21:11:49.375: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Running", Reason="", readiness=true. Elapsed: 24.050780719s
May 11 21:11:51.379: INFO: Pod "pod-subpath-test-downwardapi-x78t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.05457352s
STEP: Saw pod success
May 11 21:11:51.379: INFO: Pod "pod-subpath-test-downwardapi-x78t" satisfied condition "success or failure"
May 11 21:11:51.381: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-x78t container test-container-subpath-downwardapi-x78t:
STEP: delete the pod
May 11 21:11:51.406: INFO: Waiting for pod pod-subpath-test-downwardapi-x78t to disappear
May 11 21:11:51.416: INFO: Pod pod-subpath-test-downwardapi-x78t no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-x78t
May 11 21:11:51.416: INFO: Deleting pod "pod-subpath-test-downwardapi-x78t" in namespace "subpath-2511"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:11:51.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2511" for this suite.
• [SLOW TEST:26.197 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":88,"skipped":1594,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:11:51.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-4242
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating statefulset ss in namespace statefulset-4242
May 11 21:11:52.192: INFO: Found 0 stateful pods, waiting for 1
May 11 21:12:02.195: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 11 21:12:02.540: INFO: Deleting all statefulset in ns statefulset-4242
May 11 21:12:02.544: INFO: Scaling statefulset ss to 0
May 11 21:12:22.786: INFO: Waiting for statefulset status.replicas updated to 0
May 11 21:12:22.789: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:12:22.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4242" for this suite.
• [SLOW TEST:31.462 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":89,"skipped":1599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:12:22.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 21:12:23.182: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:12:30.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8000" for this suite.
• [SLOW TEST:7.553 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":90,"skipped":1626,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:12:30.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: validating api versions
May 11 21:12:30.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
May 11 21:12:30.884: INFO: stderr: ""
May 11 21:12:30.884: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:12:30.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2180" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":91,"skipped":1648,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:12:30.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
May 11 21:12:30.968: INFO: Waiting up to 5m0s for pod "pod-3cf7f091-6518-4724-85c0-9c2b4b707599" in namespace "emptydir-8275" to be "success or failure"
May 11 21:12:30.984: INFO: Pod "pod-3cf7f091-6518-4724-85c0-9c2b4b707599": Phase="Pending", Reason="", readiness=false. Elapsed: 16.516812ms
May 11 21:12:32.989: INFO: Pod "pod-3cf7f091-6518-4724-85c0-9c2b4b707599": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021412046s
May 11 21:12:34.992: INFO: Pod "pod-3cf7f091-6518-4724-85c0-9c2b4b707599": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024245772s
May 11 21:12:36.996: INFO: Pod "pod-3cf7f091-6518-4724-85c0-9c2b4b707599": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027879993s
STEP: Saw pod success
May 11 21:12:36.996: INFO: Pod "pod-3cf7f091-6518-4724-85c0-9c2b4b707599" satisfied condition "success or failure"
May 11 21:12:36.998: INFO: Trying to get logs from node jerma-worker2 pod pod-3cf7f091-6518-4724-85c0-9c2b4b707599 container test-container:
STEP: delete the pod
May 11 21:12:37.069: INFO: Waiting for pod pod-3cf7f091-6518-4724-85c0-9c2b4b707599 to disappear
May 11 21:12:37.104: INFO: Pod pod-3cf7f091-6518-4724-85c0-9c2b4b707599 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:12:37.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8275" for this suite.
• [SLOW TEST:6.203 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:12:37.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 21:12:37.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 11 21:12:40.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-150 create -f -'
May 11 21:12:49.300: INFO: stderr: ""
May 11 21:12:49.301: INFO: stdout: "e2e-test-crd-publish-openapi-3569-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 11 21:12:49.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-150 delete e2e-test-crd-publish-openapi-3569-crds test-cr'
May 11 21:12:49.440: INFO: stderr: ""
May 11 21:12:49.440: INFO: stdout: "e2e-test-crd-publish-openapi-3569-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 11 21:12:49.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-150 apply -f -'
May 11 21:12:49.826: INFO: stderr: ""
May 11 21:12:49.826: INFO: stdout: "e2e-test-crd-publish-openapi-3569-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 11 21:12:49.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-150 delete e2e-test-crd-publish-openapi-3569-crds test-cr'
May 11 21:12:50.028: INFO: stderr: ""
May 11 21:12:50.028: INFO: stdout: "e2e-test-crd-publish-openapi-3569-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 11 21:12:50.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3569-crds'
May 11 21:12:50.326: INFO: stderr: ""
May 11 21:12:50.326: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3569-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:12:53.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-150" for this suite.
• [SLOW TEST:16.163 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":93,"skipped":1694,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:12:53.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
May 11 21:13:02.017: INFO: Successfully updated pod
"adopt-release-878f9" STEP: Checking that the Job readopts the Pod May 11 21:13:02.017: INFO: Waiting up to 15m0s for pod "adopt-release-878f9" in namespace "job-8877" to be "adopted" May 11 21:13:02.024: INFO: Pod "adopt-release-878f9": Phase="Running", Reason="", readiness=true. Elapsed: 6.627288ms May 11 21:13:04.028: INFO: Pod "adopt-release-878f9": Phase="Running", Reason="", readiness=true. Elapsed: 2.010624277s May 11 21:13:04.028: INFO: Pod "adopt-release-878f9" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 11 21:13:04.769: INFO: Successfully updated pod "adopt-release-878f9" STEP: Checking that the Job releases the Pod May 11 21:13:04.769: INFO: Waiting up to 15m0s for pod "adopt-release-878f9" in namespace "job-8877" to be "released" May 11 21:13:04.797: INFO: Pod "adopt-release-878f9": Phase="Running", Reason="", readiness=true. Elapsed: 27.915783ms May 11 21:13:04.797: INFO: Pod "adopt-release-878f9" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:13:04.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8877" for this suite. 
• [SLOW TEST:12.048 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":94,"skipped":1725,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:13:05.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 21:13:05.812: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 21:13:05.824: INFO: Waiting for terminating namespaces to be deleted... 
May 11 21:13:05.826: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 11 21:13:05.839: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 21:13:05.839: INFO: Container kindnet-cni ready: true, restart count 0 May 11 21:13:05.839: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 21:13:05.839: INFO: Container kube-proxy ready: true, restart count 0 May 11 21:13:05.839: INFO: adopt-release-989jp from job-8877 started at 2020-05-11 21:12:53 +0000 UTC (1 container statuses recorded) May 11 21:13:05.839: INFO: Container c ready: true, restart count 0 May 11 21:13:05.839: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 11 21:13:05.844: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 21:13:05.844: INFO: Container kindnet-cni ready: true, restart count 0 May 11 21:13:05.844: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 11 21:13:05.844: INFO: Container kube-bench ready: false, restart count 0 May 11 21:13:05.844: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 21:13:05.844: INFO: Container kube-proxy ready: true, restart count 0 May 11 21:13:05.844: INFO: adopt-release-878f9 from job-8877 started at 2020-05-11 21:12:53 +0000 UTC (1 container statuses recorded) May 11 21:13:05.844: INFO: Container c ready: true, restart count 0 May 11 21:13:05.844: INFO: adopt-release-4p98g from job-8877 started at 2020-05-11 21:13:05 +0000 UTC (1 container statuses recorded) May 11 21:13:05.844: INFO: Container c ready: false, restart count 0 May 11 21:13:05.844: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 11 21:13:05.844: 
INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-8538e704-3f83-4ce9-86dd-d69936f99abf 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-8538e704-3f83-4ce9-86dd-d69936f99abf off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-8538e704-3f83-4ce9-86dd-d69936f99abf [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:13:23.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-704" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.367 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":95,"skipped":1732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:13:23.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 11 21:13:32.399: INFO: Successfully updated pod "labelsupdatee00b6e77-ba4a-4d1c-9d24-1b4797b5aa0d" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
May 11 21:13:34.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7476" for this suite. • [SLOW TEST:11.739 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1818,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:13:35.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:13:37.677: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 21:13:39.682: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828417, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828417, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828417, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828417, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:13:41.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828417, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828417, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828417, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724828417, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:13:44.980: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 
[It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:13:45.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:13:47.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5897" for this suite. STEP: Destroying namespace "webhook-5897-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.220 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":97,"skipped":1848,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:13:47.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-sw5b STEP: Creating a pod to test atomic-volume-subpath May 11 21:13:48.047: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-sw5b" in namespace "subpath-167" to be "success or failure" May 11 
21:13:48.101: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.580577ms May 11 21:13:50.941: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.894278166s May 11 21:13:53.357: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.310691588s May 11 21:13:55.393: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.345815351s May 11 21:13:57.570: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 9.522950428s May 11 21:13:59.574: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 11.527422506s May 11 21:14:01.662: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 13.615321781s May 11 21:14:03.666: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 15.619103965s May 11 21:14:05.669: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 17.622571667s May 11 21:14:07.673: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 19.626168065s May 11 21:14:09.677: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 21.630185158s May 11 21:14:11.715: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 23.668062555s May 11 21:14:13.722: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 25.675253338s May 11 21:14:15.725: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Running", Reason="", readiness=true. Elapsed: 27.677892745s May 11 21:14:17.818: INFO: Pod "pod-subpath-test-projected-sw5b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 29.771229453s STEP: Saw pod success May 11 21:14:17.818: INFO: Pod "pod-subpath-test-projected-sw5b" satisfied condition "success or failure" May 11 21:14:17.822: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-sw5b container test-container-subpath-projected-sw5b: STEP: delete the pod May 11 21:14:18.053: INFO: Waiting for pod pod-subpath-test-projected-sw5b to disappear May 11 21:14:18.085: INFO: Pod pod-subpath-test-projected-sw5b no longer exists STEP: Deleting pod pod-subpath-test-projected-sw5b May 11 21:14:18.085: INFO: Deleting pod "pod-subpath-test-projected-sw5b" in namespace "subpath-167" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:14:18.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-167" for this suite. • [SLOW TEST:30.440 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":98,"skipped":1868,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 
21:14:18.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-2c94a862-9d47-4697-8a7c-9014fc13bcf1 in namespace container-probe-885 May 11 21:14:24.484: INFO: Started pod liveness-2c94a862-9d47-4697-8a7c-9014fc13bcf1 in namespace container-probe-885 STEP: checking the pod's current state and verifying that restartCount is present May 11 21:14:24.496: INFO: Initial restart count of pod liveness-2c94a862-9d47-4697-8a7c-9014fc13bcf1 is 0 May 11 21:14:44.887: INFO: Restart count of pod container-probe-885/liveness-2c94a862-9d47-4697-8a7c-9014fc13bcf1 is now 1 (20.391667349s elapsed) May 11 21:15:05.003: INFO: Restart count of pod container-probe-885/liveness-2c94a862-9d47-4697-8a7c-9014fc13bcf1 is now 2 (40.507663352s elapsed) May 11 21:15:25.236: INFO: Restart count of pod container-probe-885/liveness-2c94a862-9d47-4697-8a7c-9014fc13bcf1 is now 3 (1m0.74061988s elapsed) May 11 21:15:43.948: INFO: Restart count of pod container-probe-885/liveness-2c94a862-9d47-4697-8a7c-9014fc13bcf1 is now 4 (1m19.452056005s elapsed) May 11 21:16:47.943: INFO: Restart count of pod container-probe-885/liveness-2c94a862-9d47-4697-8a7c-9014fc13bcf1 is now 5 (2m23.446915894s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:16:48.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-885" for this suite. 
• [SLOW TEST:150.302 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1882,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:16:48.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:17:06.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9132" for this suite. 
• [SLOW TEST:18.464 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":100,"skipped":1885,"failed":0} SS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:17:06.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9466 STEP: creating replication controller nodeport-test in namespace services-9466 I0511 21:17:07.892686 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9466, replica count: 2 I0511 21:17:10.943111 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:17:13.943310 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 21:17:13.943: INFO: Creating new exec pod May 11 21:17:19.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9466 execpodkml4p -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 11 21:17:19.771: INFO: stderr: "I0511 21:17:19.679345 612 log.go:172] (0xc000c21550) (0xc000c1e320) Create stream\nI0511 21:17:19.679406 612 log.go:172] (0xc000c21550) (0xc000c1e320) Stream added, broadcasting: 1\nI0511 21:17:19.681813 612 log.go:172] (0xc000c21550) Reply frame received for 1\nI0511 21:17:19.681849 612 log.go:172] (0xc000c21550) (0xc000c98320) Create stream\nI0511 21:17:19.681861 612 log.go:172] (0xc000c21550) (0xc000c98320) Stream added, broadcasting: 3\nI0511 21:17:19.682555 612 log.go:172] (0xc000c21550) Reply frame received for 3\nI0511 21:17:19.682592 612 log.go:172] (0xc000c21550) (0xc000c1e3c0) Create stream\nI0511 21:17:19.682609 612 log.go:172] (0xc000c21550) (0xc000c1e3c0) Stream added, broadcasting: 5\nI0511 21:17:19.683673 612 log.go:172] (0xc000c21550) Reply frame received for 5\nI0511 21:17:19.765913 612 log.go:172] (0xc000c21550) Data frame received for 3\nI0511 21:17:19.765954 612 log.go:172] (0xc000c98320) (3) Data frame handling\nI0511 21:17:19.765985 612 log.go:172] (0xc000c21550) Data frame received for 5\nI0511 21:17:19.766003 612 log.go:172] (0xc000c1e3c0) (5) Data frame handling\nI0511 21:17:19.766027 612 log.go:172] (0xc000c1e3c0) (5) Data frame sent\nI0511 21:17:19.766042 612 log.go:172] (0xc000c21550) Data frame received for 5\nI0511 21:17:19.766055 612 log.go:172] (0xc000c1e3c0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0511 21:17:19.767341 612 log.go:172] (0xc000c21550) Data frame received for 1\nI0511 21:17:19.767356 612 log.go:172] (0xc000c1e320) (1) Data frame handling\nI0511 21:17:19.767366 612 log.go:172] 
(0xc000c1e320) (1) Data frame sent\nI0511 21:17:19.767375 612 log.go:172] (0xc000c21550) (0xc000c1e320) Stream removed, broadcasting: 1\nI0511 21:17:19.767411 612 log.go:172] (0xc000c21550) Go away received\nI0511 21:17:19.767575 612 log.go:172] (0xc000c21550) (0xc000c1e320) Stream removed, broadcasting: 1\nI0511 21:17:19.767588 612 log.go:172] (0xc000c21550) (0xc000c98320) Stream removed, broadcasting: 3\nI0511 21:17:19.767592 612 log.go:172] (0xc000c21550) (0xc000c1e3c0) Stream removed, broadcasting: 5\n" May 11 21:17:19.771: INFO: stdout: "" May 11 21:17:19.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9466 execpodkml4p -- /bin/sh -x -c nc -zv -t -w 2 10.101.177.223 80' May 11 21:17:19.962: INFO: stderr: "I0511 21:17:19.891136 632 log.go:172] (0xc0001053f0) (0xc000b005a0) Create stream\nI0511 21:17:19.891185 632 log.go:172] (0xc0001053f0) (0xc000b005a0) Stream added, broadcasting: 1\nI0511 21:17:19.894915 632 log.go:172] (0xc0001053f0) Reply frame received for 1\nI0511 21:17:19.894946 632 log.go:172] (0xc0001053f0) (0xc000804640) Create stream\nI0511 21:17:19.894952 632 log.go:172] (0xc0001053f0) (0xc000804640) Stream added, broadcasting: 3\nI0511 21:17:19.895825 632 log.go:172] (0xc0001053f0) Reply frame received for 3\nI0511 21:17:19.895850 632 log.go:172] (0xc0001053f0) (0xc000541400) Create stream\nI0511 21:17:19.895856 632 log.go:172] (0xc0001053f0) (0xc000541400) Stream added, broadcasting: 5\nI0511 21:17:19.896843 632 log.go:172] (0xc0001053f0) Reply frame received for 5\nI0511 21:17:19.957415 632 log.go:172] (0xc0001053f0) Data frame received for 5\nI0511 21:17:19.957438 632 log.go:172] (0xc000541400) (5) Data frame handling\nI0511 21:17:19.957449 632 log.go:172] (0xc000541400) (5) Data frame sent\nI0511 21:17:19.957458 632 log.go:172] (0xc0001053f0) Data frame received for 5\nI0511 21:17:19.957465 632 log.go:172] (0xc000541400) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.177.223 80\nConnection 
to 10.101.177.223 80 port [tcp/http] succeeded!\nI0511 21:17:19.957487 632 log.go:172] (0xc0001053f0) Data frame received for 3\nI0511 21:17:19.957496 632 log.go:172] (0xc000804640) (3) Data frame handling\nI0511 21:17:19.958696 632 log.go:172] (0xc0001053f0) Data frame received for 1\nI0511 21:17:19.958761 632 log.go:172] (0xc000b005a0) (1) Data frame handling\nI0511 21:17:19.958779 632 log.go:172] (0xc000b005a0) (1) Data frame sent\nI0511 21:17:19.958791 632 log.go:172] (0xc0001053f0) (0xc000b005a0) Stream removed, broadcasting: 1\nI0511 21:17:19.958837 632 log.go:172] (0xc0001053f0) Go away received\nI0511 21:17:19.959040 632 log.go:172] (0xc0001053f0) (0xc000b005a0) Stream removed, broadcasting: 1\nI0511 21:17:19.959083 632 log.go:172] (0xc0001053f0) (0xc000804640) Stream removed, broadcasting: 3\nI0511 21:17:19.959096 632 log.go:172] (0xc0001053f0) (0xc000541400) Stream removed, broadcasting: 5\n" May 11 21:17:19.962: INFO: stdout: "" May 11 21:17:19.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9466 execpodkml4p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31925' May 11 21:17:20.158: INFO: stderr: "I0511 21:17:20.089029 652 log.go:172] (0xc000bf8840) (0xc0008f0000) Create stream\nI0511 21:17:20.089080 652 log.go:172] (0xc000bf8840) (0xc0008f0000) Stream added, broadcasting: 1\nI0511 21:17:20.091198 652 log.go:172] (0xc000bf8840) Reply frame received for 1\nI0511 21:17:20.091229 652 log.go:172] (0xc000bf8840) (0xc0008f00a0) Create stream\nI0511 21:17:20.091235 652 log.go:172] (0xc000bf8840) (0xc0008f00a0) Stream added, broadcasting: 3\nI0511 21:17:20.091956 652 log.go:172] (0xc000bf8840) Reply frame received for 3\nI0511 21:17:20.091992 652 log.go:172] (0xc000bf8840) (0xc00071fae0) Create stream\nI0511 21:17:20.092005 652 log.go:172] (0xc000bf8840) (0xc00071fae0) Stream added, broadcasting: 5\nI0511 21:17:20.092748 652 log.go:172] (0xc000bf8840) Reply frame received for 5\nI0511 21:17:20.152725 652 
log.go:172] (0xc000bf8840) Data frame received for 5\nI0511 21:17:20.152752 652 log.go:172] (0xc00071fae0) (5) Data frame handling\nI0511 21:17:20.152770 652 log.go:172] (0xc00071fae0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 31925\nI0511 21:17:20.153001 652 log.go:172] (0xc000bf8840) Data frame received for 5\nI0511 21:17:20.153017 652 log.go:172] (0xc00071fae0) (5) Data frame handling\nI0511 21:17:20.153027 652 log.go:172] (0xc00071fae0) (5) Data frame sent\nConnection to 172.17.0.10 31925 port [tcp/31925] succeeded!\nI0511 21:17:20.153492 652 log.go:172] (0xc000bf8840) Data frame received for 5\nI0511 21:17:20.153519 652 log.go:172] (0xc00071fae0) (5) Data frame handling\nI0511 21:17:20.153532 652 log.go:172] (0xc000bf8840) Data frame received for 3\nI0511 21:17:20.153537 652 log.go:172] (0xc0008f00a0) (3) Data frame handling\nI0511 21:17:20.154591 652 log.go:172] (0xc000bf8840) Data frame received for 1\nI0511 21:17:20.154605 652 log.go:172] (0xc0008f0000) (1) Data frame handling\nI0511 21:17:20.154614 652 log.go:172] (0xc0008f0000) (1) Data frame sent\nI0511 21:17:20.154782 652 log.go:172] (0xc000bf8840) (0xc0008f0000) Stream removed, broadcasting: 1\nI0511 21:17:20.154940 652 log.go:172] (0xc000bf8840) Go away received\nI0511 21:17:20.155117 652 log.go:172] (0xc000bf8840) (0xc0008f0000) Stream removed, broadcasting: 1\nI0511 21:17:20.155133 652 log.go:172] (0xc000bf8840) (0xc0008f00a0) Stream removed, broadcasting: 3\nI0511 21:17:20.155142 652 log.go:172] (0xc000bf8840) (0xc00071fae0) Stream removed, broadcasting: 5\n" May 11 21:17:20.158: INFO: stdout: "" May 11 21:17:20.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9466 execpodkml4p -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31925' May 11 21:17:20.349: INFO: stderr: "I0511 21:17:20.283297 673 log.go:172] (0xc0007ba9a0) (0xc000796000) Create stream\nI0511 21:17:20.283337 673 log.go:172] (0xc0007ba9a0) (0xc000796000) Stream added, broadcasting: 
1\nI0511 21:17:20.285059 673 log.go:172] (0xc0007ba9a0) Reply frame received for 1\nI0511 21:17:20.285092 673 log.go:172] (0xc0007ba9a0) (0xc00066fae0) Create stream\nI0511 21:17:20.285102 673 log.go:172] (0xc0007ba9a0) (0xc00066fae0) Stream added, broadcasting: 3\nI0511 21:17:20.285919 673 log.go:172] (0xc0007ba9a0) Reply frame received for 3\nI0511 21:17:20.285943 673 log.go:172] (0xc0007ba9a0) (0xc000222000) Create stream\nI0511 21:17:20.285949 673 log.go:172] (0xc0007ba9a0) (0xc000222000) Stream added, broadcasting: 5\nI0511 21:17:20.286898 673 log.go:172] (0xc0007ba9a0) Reply frame received for 5\nI0511 21:17:20.342122 673 log.go:172] (0xc0007ba9a0) Data frame received for 5\nI0511 21:17:20.342154 673 log.go:172] (0xc000222000) (5) Data frame handling\nI0511 21:17:20.342178 673 log.go:172] (0xc000222000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 31925\nI0511 21:17:20.342911 673 log.go:172] (0xc0007ba9a0) Data frame received for 5\nI0511 21:17:20.342933 673 log.go:172] (0xc000222000) (5) Data frame handling\nI0511 21:17:20.342955 673 log.go:172] (0xc000222000) (5) Data frame sent\nConnection to 172.17.0.8 31925 port [tcp/31925] succeeded!\nI0511 21:17:20.343042 673 log.go:172] (0xc0007ba9a0) Data frame received for 5\nI0511 21:17:20.343053 673 log.go:172] (0xc000222000) (5) Data frame handling\nI0511 21:17:20.343272 673 log.go:172] (0xc0007ba9a0) Data frame received for 3\nI0511 21:17:20.343339 673 log.go:172] (0xc00066fae0) (3) Data frame handling\nI0511 21:17:20.344475 673 log.go:172] (0xc0007ba9a0) Data frame received for 1\nI0511 21:17:20.344497 673 log.go:172] (0xc000796000) (1) Data frame handling\nI0511 21:17:20.344529 673 log.go:172] (0xc000796000) (1) Data frame sent\nI0511 21:17:20.344552 673 log.go:172] (0xc0007ba9a0) (0xc000796000) Stream removed, broadcasting: 1\nI0511 21:17:20.344744 673 log.go:172] (0xc0007ba9a0) Go away received\nI0511 21:17:20.344919 673 log.go:172] (0xc0007ba9a0) (0xc000796000) Stream removed, broadcasting: 1\nI0511 
21:17:20.344941 673 log.go:172] (0xc0007ba9a0) (0xc00066fae0) Stream removed, broadcasting: 3\nI0511 21:17:20.344950 673 log.go:172] (0xc0007ba9a0) (0xc000222000) Stream removed, broadcasting: 5\n" May 11 21:17:20.349: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:17:20.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9466" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:13.495 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":101,"skipped":1887,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:17:20.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name 
secret-test-c31c4b25-a6d3-4096-a27f-a5eae0d491c7 STEP: Creating a pod to test consume secrets May 11 21:17:20.461: INFO: Waiting up to 5m0s for pod "pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc" in namespace "secrets-1157" to be "success or failure" May 11 21:17:20.480: INFO: Pod "pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 19.047542ms May 11 21:17:22.718: INFO: Pod "pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256879547s May 11 21:17:24.952: INFO: Pod "pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc": Phase="Running", Reason="", readiness=true. Elapsed: 4.491360568s May 11 21:17:26.963: INFO: Pod "pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.502032427s STEP: Saw pod success May 11 21:17:26.963: INFO: Pod "pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc" satisfied condition "success or failure" May 11 21:17:27.180: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc container secret-volume-test: STEP: delete the pod May 11 21:17:27.630: INFO: Waiting for pod pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc to disappear May 11 21:17:27.721: INFO: Pod pod-secrets-e93a69f0-3219-404d-8f59-8b02129fbabc no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:17:27.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1157" for this suite. 
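Editor's note: the "consumable in multiple volumes" spec above mounts a single Secret into one pod at two different paths and reads it back from both. A minimal manifest of the same shape (names and image are illustrative, not the generated ones from this run) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example        # the e2e framework generates a random name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1          # both volumes reference the same Secret
    secret:
      secretName: secret-test-example
  - name: secret-volume-2
    secret:
      secretName: secret-test-example
```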
• [SLOW TEST:7.687 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1894,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:17:28.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 11 21:17:29.832: INFO: mount-test service account has no secret references STEP: getting the auto-created API token STEP: reading a file in the container May 11 21:17:38.677: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4089 pod-service-account-e6f41fb4-68f5-4445-9f73-f34c9af2e327 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 11 21:17:39.462: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4089 pod-service-account-e6f41fb4-68f5-4445-9f73-f34c9af2e327 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the 
container May 11 21:17:39.702: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4089 pod-service-account-e6f41fb4-68f5-4445-9f73-f34c9af2e327 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:17:39.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4089" for this suite. • [SLOW TEST:11.847 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":103,"skipped":1921,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:17:39.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod 
busybox-649dd1a4-e749-47de-8c30-cbd2ddba581a in namespace container-probe-2132 May 11 21:17:48.066: INFO: Started pod busybox-649dd1a4-e749-47de-8c30-cbd2ddba581a in namespace container-probe-2132 STEP: checking the pod's current state and verifying that restartCount is present May 11 21:17:48.068: INFO: Initial restart count of pod busybox-649dd1a4-e749-47de-8c30-cbd2ddba581a is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:21:49.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2132" for this suite. • [SLOW TEST:249.699 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1927,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:21:49.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service 
account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 21:21:55.633: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:21:56.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5187" for this suite. • [SLOW TEST:6.517 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1955,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:21:56.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 11 21:21:56.713: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 11 21:22:07.979: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:22:07.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3493" for this suite. 
• [SLOW TEST:11.898 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1966,"failed":0} SS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:22:08.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-ac13f570-f039-4778-b131-8bfeae5e1281 STEP: Creating secret with name s-test-opt-upd-94a11cbc-d2ea-4a56-bfa3-4c7cefa120a4 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-ac13f570-f039-4778-b131-8bfeae5e1281 STEP: Updating secret s-test-opt-upd-94a11cbc-d2ea-4a56-bfa3-4c7cefa120a4 STEP: Creating secret with name s-test-opt-create-ba48e9f3-4075-4826-9f41-71ce636523b2 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:22:22.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2973" for this suite. 
• [SLOW TEST:14.616 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1968,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:22:22.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 11 21:22:22.717: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:22:38.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9163" for this suite. 
• [SLOW TEST:16.218 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":108,"skipped":1975,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:22:38.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:22:43.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4806" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1998,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:22:43.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:22:49.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7648" for this suite. 
• [SLOW TEST:6.106 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":110,"skipped":2007,"failed":0} [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:22:49.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:22:50.109: INFO: Creating ReplicaSet my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93 May 11 21:22:50.274: INFO: Pod name my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93: Found 0 pods out of 1 May 11 21:22:55.278: INFO: Pod name my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93: Found 1 pods out of 1 May 11 21:22:55.278: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93" is running May 11 21:22:57.292: INFO: Pod "my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93-xzm99" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:22:50 +0000 UTC Reason: Message:} {Type:Ready 
Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:22:50 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:22:50 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:22:50 +0000 UTC Reason: Message:}]) May 11 21:22:57.292: INFO: Trying to dial the pod May 11 21:23:02.383: INFO: Controller my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93: Got expected result from replica 1 [my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93-xzm99]: "my-hostname-basic-fd2fb77c-2624-47a4-8deb-d1e46630eb93-xzm99", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:23:02.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1176" for this suite. 
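Editor's note: the ReplicaSet spec above creates one replica of a server that answers with its own pod hostname, then dials it to confirm the expected reply. Roughly the manifest it builds (the image reference is an assumption; the test uses an agnhost-style serve-hostname image, and the generated names are abbreviated here):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic      # must match the pod template labels below
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image/tag
        args: ["serve-hostname"]   # HTTP server that replies with the pod's hostname
        ports:
        - containerPort: 9376
```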
• [SLOW TEST:12.640 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":111,"skipped":2007,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:23:02.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:23:03.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132" in namespace "projected-4210" to be "success or failure" May 11 21:23:03.403: INFO: Pod "downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132": Phase="Pending", Reason="", readiness=false. Elapsed: 228.21197ms May 11 21:23:05.436: INFO: Pod "downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.260739861s May 11 21:23:07.837: INFO: Pod "downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132": Phase="Pending", Reason="", readiness=false. Elapsed: 4.662301634s May 11 21:23:10.030: INFO: Pod "downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132": Phase="Running", Reason="", readiness=true. Elapsed: 6.854964951s May 11 21:23:12.214: INFO: Pod "downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.03934444s STEP: Saw pod success May 11 21:23:12.214: INFO: Pod "downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132" satisfied condition "success or failure" May 11 21:23:12.217: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132 container client-container: STEP: delete the pod May 11 21:23:12.535: INFO: Waiting for pod downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132 to disappear May 11 21:23:12.609: INFO: Pod downwardapi-volume-57b9d916-8b17-440a-8f23-dfcbc1e7c132 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:23:12.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4210" for this suite. 
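Editor's note: the projected downwardAPI spec above exposes the container's memory limit as a file in a projected volume and reads it back. A minimal sketch of that wiring (names and values illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit     # file contents come from the resource field below
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```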
• [SLOW TEST:10.247 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":2020,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:23:12.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:23:13.137: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 21:23:16.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9814 create -f -' May 11 21:23:27.995: INFO: stderr: "" May 11 21:23:27.995: INFO: stdout: "e2e-test-crd-publish-openapi-7241-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 11 21:23:27.995: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9814 delete e2e-test-crd-publish-openapi-7241-crds test-cr' May 11 21:23:28.256: INFO: stderr: "" May 11 21:23:28.256: INFO: stdout: "e2e-test-crd-publish-openapi-7241-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 11 21:23:28.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9814 apply -f -' May 11 21:23:29.093: INFO: stderr: "" May 11 21:23:29.093: INFO: stdout: "e2e-test-crd-publish-openapi-7241-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 11 21:23:29.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9814 delete e2e-test-crd-publish-openapi-7241-crds test-cr' May 11 21:23:29.559: INFO: stderr: "" May 11 21:23:29.559: INFO: stdout: "e2e-test-crd-publish-openapi-7241-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 11 21:23:29.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7241-crds' May 11 21:23:30.757: INFO: stderr: "" May 11 21:23:30.757: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7241-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:23:33.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9814" for this suite. • [SLOW TEST:20.617 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":113,"skipped":2030,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:23:33.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:23:33.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2856" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":114,"skipped":2038,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:23:33.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:23:36.391: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 21:23:38.491: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829016, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829016, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829016, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829016, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:23:40.776: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829016, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829016, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829016, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829016, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:23:44.461: INFO: Waiting for amount of 
service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:23:44.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4255" for this suite. STEP: Destroying namespace "webhook-4255-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.355 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":115,"skipped":2045,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:23:45.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:23:47.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8933" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":2051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:23:47.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:23:48.170: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2734 I0511 21:23:48.273502 6 runners.go:189] Created replication controller with name: svc-latency-rc, 
namespace: svc-latency-2734, replica count: 1 I0511 21:23:49.323781 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:23:50.323949 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:23:51.324127 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:23:52.324334 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:23:53.324553 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:23:54.324762 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:23:55.324942 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 21:23:55.455: INFO: Created: latency-svc-hhp6k May 11 21:23:55.470: INFO: Got endpoints: latency-svc-hhp6k [44.941962ms] May 11 21:23:55.575: INFO: Created: latency-svc-bs8hf May 11 21:23:55.649: INFO: Got endpoints: latency-svc-bs8hf [179.408738ms] May 11 21:23:55.650: INFO: Created: latency-svc-xz8ld May 11 21:23:55.674: INFO: Got endpoints: latency-svc-xz8ld [203.505548ms] May 11 21:23:55.766: INFO: Created: latency-svc-tlztp May 11 21:23:55.781: INFO: Got endpoints: latency-svc-tlztp [311.526482ms] May 11 21:23:55.829: INFO: Created: latency-svc-h5f7p May 11 21:23:55.846: INFO: Got endpoints: latency-svc-h5f7p [376.413689ms] May 11 21:23:55.960: INFO: Created: latency-svc-vnbbj May 11 21:23:55.978: INFO: Got endpoints: 
latency-svc-vnbbj [508.198684ms] May 11 21:23:56.027: INFO: Created: latency-svc-gkpzt May 11 21:23:56.179: INFO: Got endpoints: latency-svc-gkpzt [709.12114ms] May 11 21:23:56.195: INFO: Created: latency-svc-lhmdz May 11 21:23:56.231: INFO: Got endpoints: latency-svc-lhmdz [760.717778ms] May 11 21:23:56.266: INFO: Created: latency-svc-m9h6n May 11 21:23:56.389: INFO: Got endpoints: latency-svc-m9h6n [919.19358ms] May 11 21:23:56.393: INFO: Created: latency-svc-8tfk5 May 11 21:23:56.401: INFO: Got endpoints: latency-svc-8tfk5 [930.924489ms] May 11 21:23:56.434: INFO: Created: latency-svc-v6kbf May 11 21:23:56.467: INFO: Got endpoints: latency-svc-v6kbf [997.000701ms] May 11 21:23:56.575: INFO: Created: latency-svc-bkmxs May 11 21:23:56.582: INFO: Got endpoints: latency-svc-bkmxs [1.111856027s] May 11 21:23:56.647: INFO: Created: latency-svc-lflh4 May 11 21:23:56.659: INFO: Got endpoints: latency-svc-lflh4 [1.189510531s] May 11 21:23:56.796: INFO: Created: latency-svc-nz2hg May 11 21:23:56.804: INFO: Got endpoints: latency-svc-nz2hg [1.33393726s] May 11 21:23:56.849: INFO: Created: latency-svc-5dhfb May 11 21:23:56.880: INFO: Got endpoints: latency-svc-5dhfb [1.409622s] May 11 21:23:56.946: INFO: Created: latency-svc-psn8l May 11 21:23:56.954: INFO: Got endpoints: latency-svc-psn8l [1.484493977s] May 11 21:23:56.991: INFO: Created: latency-svc-lw9xd May 11 21:23:57.003: INFO: Got endpoints: latency-svc-lw9xd [1.353357495s] May 11 21:23:57.027: INFO: Created: latency-svc-v4bfr May 11 21:23:57.046: INFO: Got endpoints: latency-svc-v4bfr [1.371916807s] May 11 21:23:57.102: INFO: Created: latency-svc-pp79c May 11 21:23:57.125: INFO: Got endpoints: latency-svc-pp79c [1.344044743s] May 11 21:23:57.174: INFO: Created: latency-svc-mtdpd May 11 21:23:57.200: INFO: Got endpoints: latency-svc-mtdpd [1.354169339s] May 11 21:23:57.255: INFO: Created: latency-svc-wt8nc May 11 21:23:57.273: INFO: Got endpoints: latency-svc-wt8nc [1.294896733s] May 11 21:23:57.316: INFO: Created: 
latency-svc-4k5tp May 11 21:23:57.389: INFO: Got endpoints: latency-svc-4k5tp [1.20969721s] May 11 21:23:57.392: INFO: Created: latency-svc-gqbgh May 11 21:23:57.405: INFO: Got endpoints: latency-svc-gqbgh [1.174839596s] May 11 21:23:57.443: INFO: Created: latency-svc-44b9z May 11 21:23:57.473: INFO: Got endpoints: latency-svc-44b9z [1.084063834s] May 11 21:23:57.557: INFO: Created: latency-svc-t4gg2 May 11 21:23:57.574: INFO: Got endpoints: latency-svc-t4gg2 [1.173114549s] May 11 21:23:57.648: INFO: Created: latency-svc-h7cj7 May 11 21:23:57.695: INFO: Got endpoints: latency-svc-h7cj7 [1.227793442s] May 11 21:23:57.707: INFO: Created: latency-svc-4kzw6 May 11 21:23:57.727: INFO: Got endpoints: latency-svc-4kzw6 [1.144854341s] May 11 21:23:57.851: INFO: Created: latency-svc-ztp75 May 11 21:23:57.857: INFO: Got endpoints: latency-svc-ztp75 [1.197279449s] May 11 21:23:57.935: INFO: Created: latency-svc-hl588 May 11 21:23:58.000: INFO: Got endpoints: latency-svc-hl588 [1.195694333s] May 11 21:23:58.011: INFO: Created: latency-svc-d4l4n May 11 21:23:58.033: INFO: Got endpoints: latency-svc-d4l4n [1.153642089s] May 11 21:23:58.059: INFO: Created: latency-svc-x8wdz May 11 21:23:58.075: INFO: Got endpoints: latency-svc-x8wdz [1.120848708s] May 11 21:23:58.180: INFO: Created: latency-svc-nb67k May 11 21:23:58.241: INFO: Got endpoints: latency-svc-nb67k [1.238785192s] May 11 21:23:58.242: INFO: Created: latency-svc-crg2s May 11 21:23:58.268: INFO: Got endpoints: latency-svc-crg2s [1.222182598s] May 11 21:23:58.341: INFO: Created: latency-svc-zjvkq May 11 21:23:58.365: INFO: Got endpoints: latency-svc-zjvkq [1.239054695s] May 11 21:23:58.389: INFO: Created: latency-svc-mz4md May 11 21:23:58.407: INFO: Got endpoints: latency-svc-mz4md [1.206098726s] May 11 21:23:58.433: INFO: Created: latency-svc-cntz2 May 11 21:23:58.497: INFO: Got endpoints: latency-svc-cntz2 [1.223919422s] May 11 21:23:58.523: INFO: Created: latency-svc-jsxr9 May 11 21:23:58.539: INFO: Got endpoints: 
latency-svc-jsxr9 [1.149897062s] May 11 21:23:58.563: INFO: Created: latency-svc-9wd4j May 11 21:23:58.582: INFO: Got endpoints: latency-svc-9wd4j [1.176154941s] May 11 21:23:58.652: INFO: Created: latency-svc-99nqp May 11 21:23:58.656: INFO: Got endpoints: latency-svc-99nqp [1.18218515s] May 11 21:23:58.703: INFO: Created: latency-svc-dt9dm May 11 21:23:58.731: INFO: Got endpoints: latency-svc-dt9dm [1.15689845s] May 11 21:23:58.802: INFO: Created: latency-svc-bdhxz May 11 21:23:58.805: INFO: Got endpoints: latency-svc-bdhxz [1.110565131s] May 11 21:23:58.889: INFO: Created: latency-svc-thlvd May 11 21:23:58.982: INFO: Got endpoints: latency-svc-thlvd [1.25528951s] May 11 21:23:59.016: INFO: Created: latency-svc-fxhzm May 11 21:23:59.050: INFO: Got endpoints: latency-svc-fxhzm [1.192683357s] May 11 21:23:59.205: INFO: Created: latency-svc-r7dt7 May 11 21:23:59.272: INFO: Got endpoints: latency-svc-r7dt7 [1.271779437s] May 11 21:23:59.402: INFO: Created: latency-svc-fgdrw May 11 21:23:59.422: INFO: Got endpoints: latency-svc-fgdrw [1.388759965s] May 11 21:23:59.458: INFO: Created: latency-svc-f2mvq May 11 21:23:59.483: INFO: Got endpoints: latency-svc-f2mvq [1.407257672s] May 11 21:23:59.557: INFO: Created: latency-svc-w9nrn May 11 21:23:59.578: INFO: Got endpoints: latency-svc-w9nrn [1.337007153s] May 11 21:23:59.603: INFO: Created: latency-svc-8x26g May 11 21:23:59.621: INFO: Got endpoints: latency-svc-8x26g [1.353545573s] May 11 21:23:59.644: INFO: Created: latency-svc-5snjx May 11 21:23:59.694: INFO: Got endpoints: latency-svc-5snjx [1.329673237s] May 11 21:23:59.706: INFO: Created: latency-svc-wr8fl May 11 21:23:59.724: INFO: Got endpoints: latency-svc-wr8fl [1.317560001s] May 11 21:23:59.788: INFO: Created: latency-svc-9q8gc May 11 21:23:59.904: INFO: Got endpoints: latency-svc-9q8gc [1.407034615s] May 11 21:23:59.962: INFO: Created: latency-svc-pxltg May 11 21:23:59.978: INFO: Got endpoints: latency-svc-pxltg [1.439257405s] May 11 21:24:00.060: INFO: 
Created: latency-svc-tk9cg May 11 21:24:00.082: INFO: Got endpoints: latency-svc-tk9cg [1.500007164s] May 11 21:24:00.127: INFO: Created: latency-svc-2lkkc May 11 21:24:00.257: INFO: Got endpoints: latency-svc-2lkkc [1.6014293s] May 11 21:24:00.276: INFO: Created: latency-svc-xvlnb May 11 21:24:00.431: INFO: Got endpoints: latency-svc-xvlnb [1.699788855s] May 11 21:24:01.000: INFO: Created: latency-svc-mjwn7 May 11 21:24:01.144: INFO: Got endpoints: latency-svc-mjwn7 [2.3381776s] May 11 21:24:01.236: INFO: Created: latency-svc-dwkwc May 11 21:24:01.527: INFO: Got endpoints: latency-svc-dwkwc [2.544541384s] May 11 21:24:01.743: INFO: Created: latency-svc-srr2v May 11 21:24:01.795: INFO: Got endpoints: latency-svc-srr2v [2.744950259s] May 11 21:24:01.898: INFO: Created: latency-svc-sbcwb May 11 21:24:01.916: INFO: Got endpoints: latency-svc-sbcwb [2.644218955s] May 11 21:24:01.974: INFO: Created: latency-svc-ddp64 May 11 21:24:01.987: INFO: Got endpoints: latency-svc-ddp64 [2.564425393s] May 11 21:24:02.044: INFO: Created: latency-svc-9tx65 May 11 21:24:02.071: INFO: Got endpoints: latency-svc-9tx65 [2.588628212s] May 11 21:24:02.123: INFO: Created: latency-svc-ggjmv May 11 21:24:02.281: INFO: Got endpoints: latency-svc-ggjmv [2.702317579s] May 11 21:24:02.587: INFO: Created: latency-svc-kv4nc May 11 21:24:02.592: INFO: Got endpoints: latency-svc-kv4nc [2.970534734s] May 11 21:24:02.790: INFO: Created: latency-svc-wrsr9 May 11 21:24:02.807: INFO: Got endpoints: latency-svc-wrsr9 [3.11255976s] May 11 21:24:02.807: INFO: Created: latency-svc-2mttr May 11 21:24:02.841: INFO: Got endpoints: latency-svc-2mttr [3.116382612s] May 11 21:24:02.869: INFO: Created: latency-svc-6xqw6 May 11 21:24:02.934: INFO: Got endpoints: latency-svc-6xqw6 [3.029858315s] May 11 21:24:02.977: INFO: Created: latency-svc-kbg6v May 11 21:24:03.002: INFO: Got endpoints: latency-svc-kbg6v [3.024084046s] May 11 21:24:03.090: INFO: Created: latency-svc-rnzdp May 11 21:24:03.115: INFO: Got endpoints: 
latency-svc-rnzdp [3.033458964s] May 11 21:24:03.151: INFO: Created: latency-svc-jr2bl May 11 21:24:03.341: INFO: Got endpoints: latency-svc-jr2bl [3.083545412s] May 11 21:24:03.341: INFO: Created: latency-svc-75zx2 May 11 21:24:03.343: INFO: Got endpoints: latency-svc-75zx2 [2.912228852s] May 11 21:24:03.416: INFO: Created: latency-svc-5hf7n May 11 21:24:03.557: INFO: Got endpoints: latency-svc-5hf7n [2.413403057s] May 11 21:24:03.582: INFO: Created: latency-svc-fsmq2 May 11 21:24:03.597: INFO: Got endpoints: latency-svc-fsmq2 [2.070552333s] May 11 21:24:03.690: INFO: Created: latency-svc-k8g4f May 11 21:24:03.723: INFO: Got endpoints: latency-svc-k8g4f [1.928543445s] May 11 21:24:03.921: INFO: Created: latency-svc-knfk8 May 11 21:24:03.928: INFO: Got endpoints: latency-svc-knfk8 [2.011862029s] May 11 21:24:04.042: INFO: Created: latency-svc-dblkk May 11 21:24:04.083: INFO: Got endpoints: latency-svc-dblkk [2.096730964s] May 11 21:24:04.246: INFO: Created: latency-svc-dflxt May 11 21:24:04.275: INFO: Got endpoints: latency-svc-dflxt [2.203591507s] May 11 21:24:04.468: INFO: Created: latency-svc-kdrk4 May 11 21:24:04.531: INFO: Got endpoints: latency-svc-kdrk4 [2.250140512s] May 11 21:24:04.532: INFO: Created: latency-svc-g7c7p May 11 21:24:04.564: INFO: Got endpoints: latency-svc-g7c7p [1.972412498s] May 11 21:24:04.671: INFO: Created: latency-svc-t27sm May 11 21:24:04.676: INFO: Got endpoints: latency-svc-t27sm [1.868710911s] May 11 21:24:04.820: INFO: Created: latency-svc-l254q May 11 21:24:04.824: INFO: Got endpoints: latency-svc-l254q [1.983371053s] May 11 21:24:04.862: INFO: Created: latency-svc-gsggd May 11 21:24:04.877: INFO: Got endpoints: latency-svc-gsggd [1.942752334s] May 11 21:24:04.897: INFO: Created: latency-svc-m545s May 11 21:24:04.970: INFO: Got endpoints: latency-svc-m545s [1.967455332s] May 11 21:24:05.013: INFO: Created: latency-svc-6bc6m May 11 21:24:05.031: INFO: Got endpoints: latency-svc-6bc6m [1.915609617s] May 11 21:24:05.067: INFO: 
Created: latency-svc-zdjm2 May 11 21:24:05.150: INFO: Got endpoints: latency-svc-zdjm2 [1.808998351s] May 11 21:24:05.173: INFO: Created: latency-svc-qswsc May 11 21:24:05.190: INFO: Got endpoints: latency-svc-qswsc [1.846702583s] May 11 21:24:05.233: INFO: Created: latency-svc-wtmrk May 11 21:24:05.306: INFO: Got endpoints: latency-svc-wtmrk [1.748592613s] May 11 21:24:05.312: INFO: Created: latency-svc-hx7pb May 11 21:24:05.329: INFO: Got endpoints: latency-svc-hx7pb [1.731098495s] May 11 21:24:05.401: INFO: Created: latency-svc-r652k May 11 21:24:05.455: INFO: Got endpoints: latency-svc-r652k [1.73214533s] May 11 21:24:05.475: INFO: Created: latency-svc-hn9dp May 11 21:24:05.492: INFO: Got endpoints: latency-svc-hn9dp [1.564414084s] May 11 21:24:05.522: INFO: Created: latency-svc-q7fmw May 11 21:24:05.538: INFO: Got endpoints: latency-svc-q7fmw [1.454444115s] May 11 21:24:05.598: INFO: Created: latency-svc-2b644 May 11 21:24:05.613: INFO: Got endpoints: latency-svc-2b644 [1.338331471s] May 11 21:24:05.650: INFO: Created: latency-svc-h7gcs May 11 21:24:05.680: INFO: Got endpoints: latency-svc-h7gcs [1.148927289s] May 11 21:24:05.754: INFO: Created: latency-svc-nmf7t May 11 21:24:05.763: INFO: Got endpoints: latency-svc-nmf7t [1.198986888s] May 11 21:24:05.793: INFO: Created: latency-svc-kw67r May 11 21:24:05.800: INFO: Got endpoints: latency-svc-kw67r [1.124505083s] May 11 21:24:05.830: INFO: Created: latency-svc-khj6x May 11 21:24:05.848: INFO: Got endpoints: latency-svc-khj6x [1.023797338s] May 11 21:24:05.904: INFO: Created: latency-svc-8t2bq May 11 21:24:05.907: INFO: Got endpoints: latency-svc-8t2bq [1.030281225s] May 11 21:24:05.943: INFO: Created: latency-svc-lh5vc May 11 21:24:05.957: INFO: Got endpoints: latency-svc-lh5vc [987.196912ms] May 11 21:24:05.998: INFO: Created: latency-svc-l9jrx May 11 21:24:06.090: INFO: Got endpoints: latency-svc-l9jrx [1.058745766s] May 11 21:24:06.104: INFO: Created: latency-svc-ktc4s May 11 21:24:06.132: INFO: Got 
endpoints: latency-svc-ktc4s [982.213007ms] May 11 21:24:06.311: INFO: Created: latency-svc-fwwz9 May 11 21:24:06.326: INFO: Got endpoints: latency-svc-fwwz9 [1.136337353s] May 11 21:24:06.515: INFO: Created: latency-svc-26jvx May 11 21:24:06.538: INFO: Got endpoints: latency-svc-26jvx [1.231675797s] May 11 21:24:06.742: INFO: Created: latency-svc-bjhnh May 11 21:24:06.748: INFO: Got endpoints: latency-svc-bjhnh [1.419214003s] May 11 21:24:06.922: INFO: Created: latency-svc-v5qhj May 11 21:24:06.925: INFO: Got endpoints: latency-svc-v5qhj [1.46959352s] May 11 21:24:06.977: INFO: Created: latency-svc-6l4sz May 11 21:24:07.003: INFO: Got endpoints: latency-svc-6l4sz [1.510437638s] May 11 21:24:07.119: INFO: Created: latency-svc-4qqpk May 11 21:24:07.129: INFO: Got endpoints: latency-svc-4qqpk [1.590803907s] May 11 21:24:07.588: INFO: Created: latency-svc-z5kxw May 11 21:24:07.796: INFO: Got endpoints: latency-svc-z5kxw [2.182294416s] May 11 21:24:07.799: INFO: Created: latency-svc-9ncdv May 11 21:24:08.137: INFO: Got endpoints: latency-svc-9ncdv [2.457183239s] May 11 21:24:08.420: INFO: Created: latency-svc-xww7p May 11 21:24:08.439: INFO: Got endpoints: latency-svc-xww7p [2.675622766s] May 11 21:24:08.683: INFO: Created: latency-svc-qbv4d May 11 21:24:08.746: INFO: Got endpoints: latency-svc-qbv4d [2.945966431s] May 11 21:24:08.928: INFO: Created: latency-svc-4qsjs May 11 21:24:09.015: INFO: Got endpoints: latency-svc-4qsjs [3.166726424s] May 11 21:24:09.204: INFO: Created: latency-svc-dq7qv May 11 21:24:09.713: INFO: Got endpoints: latency-svc-dq7qv [3.80589502s] May 11 21:24:10.002: INFO: Created: latency-svc-shwt9 May 11 21:24:10.324: INFO: Got endpoints: latency-svc-shwt9 [4.366301098s] May 11 21:24:10.377: INFO: Created: latency-svc-hk2gm May 11 21:24:10.557: INFO: Got endpoints: latency-svc-hk2gm [4.467085553s] May 11 21:24:10.917: INFO: Created: latency-svc-rmwqp May 11 21:24:10.924: INFO: Got endpoints: latency-svc-rmwqp [4.791675072s] May 11 21:24:11.198: 
INFO: Created: latency-svc-hzml5 May 11 21:24:11.290: INFO: Got endpoints: latency-svc-hzml5 [4.963852927s] May 11 21:24:11.431: INFO: Created: latency-svc-n4hst May 11 21:24:11.434: INFO: Got endpoints: latency-svc-n4hst [4.896370304s] May 11 21:24:11.737: INFO: Created: latency-svc-gg7mt May 11 21:24:11.806: INFO: Got endpoints: latency-svc-gg7mt [5.05830145s] May 11 21:24:12.251: INFO: Created: latency-svc-2vk56 May 11 21:24:12.681: INFO: Got endpoints: latency-svc-2vk56 [5.755751994s] May 11 21:24:13.280: INFO: Created: latency-svc-k7m9x May 11 21:24:13.503: INFO: Got endpoints: latency-svc-k7m9x [6.500465853s] May 11 21:24:13.598: INFO: Created: latency-svc-tw74q May 11 21:24:13.763: INFO: Got endpoints: latency-svc-tw74q [6.633649906s] May 11 21:24:13.814: INFO: Created: latency-svc-vbnnm May 11 21:24:13.844: INFO: Got endpoints: latency-svc-vbnnm [6.047970423s] May 11 21:24:13.964: INFO: Created: latency-svc-zqng9 May 11 21:24:13.968: INFO: Got endpoints: latency-svc-zqng9 [5.830440298s] May 11 21:24:14.011: INFO: Created: latency-svc-dfqbn May 11 21:24:14.025: INFO: Got endpoints: latency-svc-dfqbn [5.585624293s] May 11 21:24:14.061: INFO: Created: latency-svc-jxf4g May 11 21:24:14.138: INFO: Got endpoints: latency-svc-jxf4g [5.391304427s] May 11 21:24:14.173: INFO: Created: latency-svc-6f525 May 11 21:24:14.180: INFO: Got endpoints: latency-svc-6f525 [5.165256475s] May 11 21:24:14.227: INFO: Created: latency-svc-rhpvz May 11 21:24:14.305: INFO: Got endpoints: latency-svc-rhpvz [4.592144201s] May 11 21:24:14.319: INFO: Created: latency-svc-kvdv5 May 11 21:24:14.344: INFO: Got endpoints: latency-svc-kvdv5 [4.020099427s] May 11 21:24:14.383: INFO: Created: latency-svc-xrctf May 11 21:24:14.455: INFO: Got endpoints: latency-svc-xrctf [3.898080723s] May 11 21:24:14.457: INFO: Created: latency-svc-llrbm May 11 21:24:14.476: INFO: Got endpoints: latency-svc-llrbm [3.552490473s] May 11 21:24:14.535: INFO: Created: latency-svc-7vgz6 May 11 21:24:14.599: INFO: Got 
endpoints: latency-svc-7vgz6 [3.308445128s] May 11 21:24:14.619: INFO: Created: latency-svc-4n9b8 May 11 21:24:14.651: INFO: Got endpoints: latency-svc-4n9b8 [3.21673286s] May 11 21:24:14.683: INFO: Created: latency-svc-w2rpt May 11 21:24:14.803: INFO: Got endpoints: latency-svc-w2rpt [2.996874257s] May 11 21:24:14.804: INFO: Created: latency-svc-zft2h May 11 21:24:14.874: INFO: Got endpoints: latency-svc-zft2h [2.192938572s] May 11 21:24:15.331: INFO: Created: latency-svc-6zwk4 May 11 21:24:15.509: INFO: Got endpoints: latency-svc-6zwk4 [2.005526295s] May 11 21:24:15.685: INFO: Created: latency-svc-t8j7d May 11 21:24:16.012: INFO: Got endpoints: latency-svc-t8j7d [2.2498102s] May 11 21:24:16.070: INFO: Created: latency-svc-7sx2v May 11 21:24:16.420: INFO: Got endpoints: latency-svc-7sx2v [2.575750992s] May 11 21:24:16.629: INFO: Created: latency-svc-d7zbn May 11 21:24:16.658: INFO: Got endpoints: latency-svc-d7zbn [2.690308703s] May 11 21:24:16.882: INFO: Created: latency-svc-kpstg May 11 21:24:16.912: INFO: Got endpoints: latency-svc-kpstg [2.887065045s] May 11 21:24:17.010: INFO: Created: latency-svc-xjqjg May 11 21:24:17.056: INFO: Got endpoints: latency-svc-xjqjg [2.918196315s] May 11 21:24:17.498: INFO: Created: latency-svc-jcrkj May 11 21:24:17.916: INFO: Got endpoints: latency-svc-jcrkj [3.736101075s] May 11 21:24:17.948: INFO: Created: latency-svc-trz8f May 11 21:24:17.997: INFO: Got endpoints: latency-svc-trz8f [3.692087533s] May 11 21:24:18.298: INFO: Created: latency-svc-xqdbj May 11 21:24:18.315: INFO: Got endpoints: latency-svc-xqdbj [3.971237587s] May 11 21:24:18.419: INFO: Created: latency-svc-54k2d May 11 21:24:18.518: INFO: Got endpoints: latency-svc-54k2d [4.062918503s] May 11 21:24:18.519: INFO: Created: latency-svc-562gc May 11 21:24:18.586: INFO: Got endpoints: latency-svc-562gc [4.109875467s] May 11 21:24:18.602: INFO: Created: latency-svc-96zkn May 11 21:24:18.610: INFO: Got endpoints: latency-svc-96zkn [4.011034263s] May 11 21:24:18.659: 
INFO: Created: latency-svc-dbtq9 May 11 21:24:18.683: INFO: Got endpoints: latency-svc-dbtq9 [4.032154393s] May 11 21:24:18.772: INFO: Created: latency-svc-hz87x May 11 21:24:18.803: INFO: Got endpoints: latency-svc-hz87x [3.999948868s] May 11 21:24:18.922: INFO: Created: latency-svc-cg5mx May 11 21:24:18.953: INFO: Got endpoints: latency-svc-cg5mx [4.07919587s] May 11 21:24:19.149: INFO: Created: latency-svc-5jxlt May 11 21:24:19.169: INFO: Got endpoints: latency-svc-5jxlt [3.66065818s] May 11 21:24:19.235: INFO: Created: latency-svc-2bdtp May 11 21:24:19.413: INFO: Got endpoints: latency-svc-2bdtp [3.400754995s] May 11 21:24:19.695: INFO: Created: latency-svc-mvmkg May 11 21:24:19.702: INFO: Got endpoints: latency-svc-mvmkg [3.282740696s] May 11 21:24:20.192: INFO: Created: latency-svc-6zpkl May 11 21:24:20.196: INFO: Got endpoints: latency-svc-6zpkl [3.537918457s] May 11 21:24:21.014: INFO: Created: latency-svc-f6srp May 11 21:24:21.192: INFO: Got endpoints: latency-svc-f6srp [4.279755009s] May 11 21:24:21.408: INFO: Created: latency-svc-btlnk May 11 21:24:21.412: INFO: Got endpoints: latency-svc-btlnk [4.356016638s] May 11 21:24:21.702: INFO: Created: latency-svc-rht5z May 11 21:24:21.707: INFO: Got endpoints: latency-svc-rht5z [3.790487153s] May 11 21:24:22.668: INFO: Created: latency-svc-b2fsz May 11 21:24:23.090: INFO: Got endpoints: latency-svc-b2fsz [5.092815152s] May 11 21:24:23.155: INFO: Created: latency-svc-8p97z May 11 21:24:23.287: INFO: Got endpoints: latency-svc-8p97z [4.972105785s] May 11 21:24:23.289: INFO: Created: latency-svc-frcdq May 11 21:24:23.371: INFO: Got endpoints: latency-svc-frcdq [4.853008944s] May 11 21:24:23.647: INFO: Created: latency-svc-klnsn May 11 21:24:23.665: INFO: Got endpoints: latency-svc-klnsn [5.079073675s] May 11 21:24:24.060: INFO: Created: latency-svc-zzxpv May 11 21:24:24.252: INFO: Got endpoints: latency-svc-zzxpv [5.641751487s] May 11 21:24:24.328: INFO: Created: latency-svc-t7gpk May 11 21:24:24.522: INFO: Got 
endpoints: latency-svc-t7gpk [5.838730103s] May 11 21:24:24.549: INFO: Created: latency-svc-p9xz7 May 11 21:24:24.619: INFO: Got endpoints: latency-svc-p9xz7 [5.815564208s] May 11 21:24:24.956: INFO: Created: latency-svc-27pmc May 11 21:24:25.108: INFO: Got endpoints: latency-svc-27pmc [6.155129494s] May 11 21:24:25.148: INFO: Created: latency-svc-2jftn May 11 21:24:25.170: INFO: Got endpoints: latency-svc-2jftn [6.000900175s] May 11 21:24:25.389: INFO: Created: latency-svc-5pgtk May 11 21:24:25.431: INFO: Got endpoints: latency-svc-5pgtk [6.017423798s] May 11 21:24:25.431: INFO: Created: latency-svc-5lv4t May 11 21:24:25.487: INFO: Got endpoints: latency-svc-5lv4t [5.784600564s] May 11 21:24:25.587: INFO: Created: latency-svc-tg8gm May 11 21:24:25.627: INFO: Got endpoints: latency-svc-tg8gm [5.431095858s] May 11 21:24:25.683: INFO: Created: latency-svc-jq9jp May 11 21:24:25.820: INFO: Got endpoints: latency-svc-jq9jp [4.628252706s] May 11 21:24:25.870: INFO: Created: latency-svc-ps6jg May 11 21:24:25.893: INFO: Got endpoints: latency-svc-ps6jg [4.481532062s] May 11 21:24:25.919: INFO: Created: latency-svc-rhbgq May 11 21:24:25.964: INFO: Got endpoints: latency-svc-rhbgq [4.257018765s] May 11 21:24:26.001: INFO: Created: latency-svc-mfxdb May 11 21:24:26.018: INFO: Got endpoints: latency-svc-mfxdb [2.927680955s] May 11 21:24:26.043: INFO: Created: latency-svc-5r96s May 11 21:24:26.054: INFO: Got endpoints: latency-svc-5r96s [2.766822364s] May 11 21:24:26.103: INFO: Created: latency-svc-6ng7q May 11 21:24:26.121: INFO: Got endpoints: latency-svc-6ng7q [2.749792881s] May 11 21:24:26.172: INFO: Created: latency-svc-s5n5d May 11 21:24:26.181: INFO: Got endpoints: latency-svc-s5n5d [2.515621955s] May 11 21:24:26.272: INFO: Created: latency-svc-4zkrm May 11 21:24:26.290: INFO: Got endpoints: latency-svc-4zkrm [2.038149663s] May 11 21:24:26.322: INFO: Created: latency-svc-hwnd5 May 11 21:24:26.338: INFO: Got endpoints: latency-svc-hwnd5 [1.815837009s] May 11 21:24:26.414: 
INFO: Created: latency-svc-7bptd May 11 21:24:26.422: INFO: Got endpoints: latency-svc-7bptd [1.803538786s] May 11 21:24:26.451: INFO: Created: latency-svc-dd4m7 May 11 21:24:26.465: INFO: Got endpoints: latency-svc-dd4m7 [1.356445499s] May 11 21:24:26.486: INFO: Created: latency-svc-s67sz May 11 21:24:26.611: INFO: Got endpoints: latency-svc-s67sz [1.440378207s] May 11 21:24:26.612: INFO: Created: latency-svc-mwr7b May 11 21:24:26.663: INFO: Got endpoints: latency-svc-mwr7b [1.232573824s] May 11 21:24:26.779: INFO: Created: latency-svc-2lwd9 May 11 21:24:26.782: INFO: Got endpoints: latency-svc-2lwd9 [1.294614175s] May 11 21:24:26.849: INFO: Created: latency-svc-526c8 May 11 21:24:26.868: INFO: Got endpoints: latency-svc-526c8 [1.24053619s] May 11 21:24:26.926: INFO: Created: latency-svc-x2fg9 May 11 21:24:26.952: INFO: Got endpoints: latency-svc-x2fg9 [1.132253688s] May 11 21:24:26.991: INFO: Created: latency-svc-dbrlp May 11 21:24:27.006: INFO: Got endpoints: latency-svc-dbrlp [1.112678868s] May 11 21:24:27.084: INFO: Created: latency-svc-dpsgv May 11 21:24:27.098: INFO: Got endpoints: latency-svc-dpsgv [1.133917425s] May 11 21:24:27.170: INFO: Created: latency-svc-mnz9l May 11 21:24:27.240: INFO: Got endpoints: latency-svc-mnz9l [1.222447143s] May 11 21:24:27.244: INFO: Created: latency-svc-98qtf May 11 21:24:27.259: INFO: Got endpoints: latency-svc-98qtf [1.205267496s] May 11 21:24:27.332: INFO: Created: latency-svc-r28gz May 11 21:24:27.437: INFO: Got endpoints: latency-svc-r28gz [1.31596049s] May 11 21:24:27.454: INFO: Created: latency-svc-f7cws May 11 21:24:27.470: INFO: Got endpoints: latency-svc-f7cws [1.28892162s] May 11 21:24:27.599: INFO: Created: latency-svc-dkh8q May 11 21:24:27.638: INFO: Got endpoints: latency-svc-dkh8q [1.348437377s] May 11 21:24:27.695: INFO: Created: latency-svc-h2x98 May 11 21:24:28.051: INFO: Got endpoints: latency-svc-h2x98 [1.713471395s] May 11 21:24:28.123: INFO: Created: latency-svc-xtvcm May 11 21:24:28.300: INFO: Got 
endpoints: latency-svc-xtvcm [1.877629733s] May 11 21:24:28.356: INFO: Created: latency-svc-58hx9 May 11 21:24:28.397: INFO: Got endpoints: latency-svc-58hx9 [1.932499047s] May 11 21:24:28.533: INFO: Created: latency-svc-lk85c May 11 21:24:28.547: INFO: Got endpoints: latency-svc-lk85c [1.93636116s] May 11 21:24:28.627: INFO: Created: latency-svc-l9qmh May 11 21:24:28.713: INFO: Got endpoints: latency-svc-l9qmh [2.049571045s] May 11 21:24:28.764: INFO: Created: latency-svc-xw79d May 11 21:24:28.776: INFO: Got endpoints: latency-svc-xw79d [1.994274529s] May 11 21:24:28.880: INFO: Created: latency-svc-7nrvt May 11 21:24:28.884: INFO: Got endpoints: latency-svc-7nrvt [2.016379189s] May 11 21:24:28.938: INFO: Created: latency-svc-fxx6p May 11 21:24:28.956: INFO: Got endpoints: latency-svc-fxx6p [2.003711145s] May 11 21:24:29.048: INFO: Created: latency-svc-nh4h4 May 11 21:24:29.052: INFO: Got endpoints: latency-svc-nh4h4 [2.045508789s] May 11 21:24:29.080: INFO: Created: latency-svc-hrwkh May 11 21:24:29.122: INFO: Got endpoints: latency-svc-hrwkh [2.024407753s] May 11 21:24:29.198: INFO: Created: latency-svc-tnvfb May 11 21:24:29.208: INFO: Got endpoints: latency-svc-tnvfb [1.967380405s] May 11 21:24:29.208: INFO: Latencies: [179.408738ms 203.505548ms 311.526482ms 376.413689ms 508.198684ms 709.12114ms 760.717778ms 919.19358ms 930.924489ms 982.213007ms 987.196912ms 997.000701ms 1.023797338s 1.030281225s 1.058745766s 1.084063834s 1.110565131s 1.111856027s 1.112678868s 1.120848708s 1.124505083s 1.132253688s 1.133917425s 1.136337353s 1.144854341s 1.148927289s 1.149897062s 1.153642089s 1.15689845s 1.173114549s 1.174839596s 1.176154941s 1.18218515s 1.189510531s 1.192683357s 1.195694333s 1.197279449s 1.198986888s 1.205267496s 1.206098726s 1.20969721s 1.222182598s 1.222447143s 1.223919422s 1.227793442s 1.231675797s 1.232573824s 1.238785192s 1.239054695s 1.24053619s 1.25528951s 1.271779437s 1.28892162s 1.294614175s 1.294896733s 1.31596049s 1.317560001s 1.329673237s 1.33393726s 
1.337007153s 1.338331471s 1.344044743s 1.348437377s 1.353357495s 1.353545573s 1.354169339s 1.356445499s 1.371916807s 1.388759965s 1.407034615s 1.407257672s 1.409622s 1.419214003s 1.439257405s 1.440378207s 1.454444115s 1.46959352s 1.484493977s 1.500007164s 1.510437638s 1.564414084s 1.590803907s 1.6014293s 1.699788855s 1.713471395s 1.731098495s 1.73214533s 1.748592613s 1.803538786s 1.808998351s 1.815837009s 1.846702583s 1.868710911s 1.877629733s 1.915609617s 1.928543445s 1.932499047s 1.93636116s 1.942752334s 1.967380405s 1.967455332s 1.972412498s 1.983371053s 1.994274529s 2.003711145s 2.005526295s 2.011862029s 2.016379189s 2.024407753s 2.038149663s 2.045508789s 2.049571045s 2.070552333s 2.096730964s 2.182294416s 2.192938572s 2.203591507s 2.2498102s 2.250140512s 2.3381776s 2.413403057s 2.457183239s 2.515621955s 2.544541384s 2.564425393s 2.575750992s 2.588628212s 2.644218955s 2.675622766s 2.690308703s 2.702317579s 2.744950259s 2.749792881s 2.766822364s 2.887065045s 2.912228852s 2.918196315s 2.927680955s 2.945966431s 2.970534734s 2.996874257s 3.024084046s 3.029858315s 3.033458964s 3.083545412s 3.11255976s 3.116382612s 3.166726424s 3.21673286s 3.282740696s 3.308445128s 3.400754995s 3.537918457s 3.552490473s 3.66065818s 3.692087533s 3.736101075s 3.790487153s 3.80589502s 3.898080723s 3.971237587s 3.999948868s 4.011034263s 4.020099427s 4.032154393s 4.062918503s 4.07919587s 4.109875467s 4.257018765s 4.279755009s 4.356016638s 4.366301098s 4.467085553s 4.481532062s 4.592144201s 4.628252706s 4.791675072s 4.853008944s 4.896370304s 4.963852927s 4.972105785s 5.05830145s 5.079073675s 5.092815152s 5.165256475s 5.391304427s 5.431095858s 5.585624293s 5.641751487s 5.755751994s 5.784600564s 5.815564208s 5.830440298s 5.838730103s 6.000900175s 6.017423798s 6.047970423s 6.155129494s 6.500465853s 6.633649906s] May 11 21:24:29.208: INFO: 50 %ile: 1.967455332s May 11 21:24:29.208: INFO: 90 %ile: 4.972105785s May 11 21:24:29.208: INFO: 99 %ile: 6.500465853s May 11 21:24:29.208: INFO: Total 
sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:24:29.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2734" for this suite. • [SLOW TEST:41.394 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":117,"skipped":2088,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:24:29.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7990.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-7990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7990.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local;check="$$(dig +notcp 
+noall +answer +search _http._tcp.dns-test-service.dns-7990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-7990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-7990.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 7.81.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.81.7_udp@PTR;check="$$(dig +tcp +noall +answer +search 7.81.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.81.7_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-7990.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-7990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-7990.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-7990.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-7990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-7990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-7990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-7990.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-7990.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-7990.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 7.81.102.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.102.81.7_udp@PTR;check="$$(dig +tcp +noall +answer +search 7.81.102.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.102.81.7_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 21:24:45.619: INFO: Unable to read wheezy_udp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:45.864: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:46.028: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:46.063: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:48.180: INFO: Unable to read jessie_udp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:49.266: INFO: Unable to read jessie_tcp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:49.668: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod 
dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:49.936: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:50.407: INFO: Lookups using dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e failed for: [wheezy_udp@dns-test-service.dns-7990.svc.cluster.local wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local jessie_udp@dns-test-service.dns-7990.svc.cluster.local jessie_tcp@dns-test-service.dns-7990.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local] May 11 21:24:55.576: INFO: Unable to read wheezy_udp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:55.732: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:55.767: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:55.771: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod 
dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:57.133: INFO: Unable to read jessie_udp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:57.474: INFO: Unable to read jessie_tcp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:57.719: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:58.043: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:24:59.558: INFO: Lookups using dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e failed for: [wheezy_udp@dns-test-service.dns-7990.svc.cluster.local wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local jessie_udp@dns-test-service.dns-7990.svc.cluster.local jessie_tcp@dns-test-service.dns-7990.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local] May 11 21:25:00.590: INFO: Unable to read wheezy_udp@dns-test-service.dns-7990.svc.cluster.local from pod 
dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:00.593: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:00.918: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:01.183: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:02.380: INFO: Unable to read jessie_udp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:02.404: INFO: Unable to read jessie_tcp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:02.409: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:02.446: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not 
find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:02.774: INFO: Lookups using dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e failed for: [wheezy_udp@dns-test-service.dns-7990.svc.cluster.local wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local jessie_udp@dns-test-service.dns-7990.svc.cluster.local jessie_tcp@dns-test-service.dns-7990.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local] May 11 21:25:05.624: INFO: Unable to read wheezy_udp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:05.627: INFO: Unable to read wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:05.696: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:05.803: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:06.267: INFO: Unable to read jessie_udp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods 
dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:06.304: INFO: Unable to read jessie_tcp@dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:06.462: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:06.477: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local from pod dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e: the server could not find the requested resource (get pods dns-test-d520ae34-28f9-4d02-89bb-64399806de3e) May 11 21:25:06.618: INFO: Lookups using dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e failed for: [wheezy_udp@dns-test-service.dns-7990.svc.cluster.local wheezy_tcp@dns-test-service.dns-7990.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local jessie_udp@dns-test-service.dns-7990.svc.cluster.local jessie_tcp@dns-test-service.dns-7990.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-7990.svc.cluster.local] May 11 21:25:11.977: INFO: DNS probes using dns-7990/dns-test-d520ae34-28f9-4d02-89bb-64399806de3e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:25:13.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7990" for this suite. 
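The dig loops in the DNS test above probe record names that are derived mechanically from an IP address: the PTR owner name reverses the service IP's octets under `in-addr.arpa.`, and the pod A record replaces the dots of the pod IP with dashes (the `awk -F.` one-liner in the probe script). A minimal sketch of both derivations; the pod IP below is hypothetical, while the service IP `10.102.81.7` and namespace `dns-7990` come from the log:

```python
def ptr_name(ipv4: str) -> str:
    """Reverse the octets of an IPv4 address to build the in-addr.arpa PTR owner name."""
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa."

def pod_a_record(pod_ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """Build the dashed pod A record name, as the probe script's awk one-liner does."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{domain}"

# The service IP 10.102.81.7 yields exactly the PTR name queried in the log:
print(ptr_name("10.102.81.7"))                 # 7.81.102.10.in-addr.arpa.
print(pod_a_record("10.244.1.5", "dns-7990"))  # hypothetical pod IP
```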
• [SLOW TEST:44.452 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":118,"skipped":2097,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:25:13.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-b5c25047-b3b3-40a3-be1c-9db561a05efb STEP: Creating a pod to test consume secrets May 11 21:25:15.865: INFO: Waiting up to 5m0s for pod "pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a" in namespace "secrets-8791" to be "success or failure" May 11 21:25:16.248: INFO: Pod "pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 383.256613ms May 11 21:25:18.330: INFO: Pod "pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.465219889s May 11 21:25:20.708: INFO: Pod "pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.842898482s May 11 21:25:22.960: INFO: Pod "pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.095326838s May 11 21:25:25.075: INFO: Pod "pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a": Phase="Running", Reason="", readiness=true. Elapsed: 9.209768777s May 11 21:25:27.733: INFO: Pod "pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.868154504s STEP: Saw pod success May 11 21:25:27.733: INFO: Pod "pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a" satisfied condition "success or failure" May 11 21:25:27.736: INFO: Trying to get logs from node jerma-worker pod pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a container secret-volume-test: STEP: delete the pod May 11 21:25:28.622: INFO: Waiting for pod pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a to disappear May 11 21:25:28.822: INFO: Pod pod-secrets-c6a5fb14-ab7e-440f-b801-05ac454a9e5a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:25:28.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8791" for this suite. 
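The Service endpoints latency test earlier in this run sorts its 200 samples and reports the 50th, 90th, and 99th percentiles. One common index convention that is consistent with the printed values (an assumption; the framework's exact formula may differ) is to take the element at index `len * p / 100` of the ascending-sorted slice, clamped to the last element:

```python
def percentile(sorted_vals, p):
    """Return the p-th percentile of an ascending-sorted list, using the
    floor(len * p / 100) index convention (clamped to the last element)."""
    if not sorted_vals:
        raise ValueError("empty sample set")
    idx = min(len(sorted_vals) * p // 100, len(sorted_vals) - 1)
    return sorted_vals[idx]

# A tiny stand-in for the 200 real samples:
samples = [0.179, 1.967, 4.972, 6.500, 6.633]
print(percentile(samples, 50))  # 4.972
```

With 200 samples this convention picks index 198 for the 99th percentile, i.e. the second-largest sample, which matches the reported 99 %ile of 6.500465853s against a maximum of 6.633649906s.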
• [SLOW TEST:15.355 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2107,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:25:29.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:25:30.804: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:25:31.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9197" for this 
suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":120,"skipped":2124,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:25:31.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-70ff2927-da96-4e17-a656-9375ef678a50 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:25:32.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-823" for this suite. 
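The validation failure exercised above comes from the API server rejecting a Secret whose data map contains an empty key. A minimal reproduction (illustrative name) that fails at create time:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test
data:
  "": dmFsdWUtMQ==   # empty key: the API server rejects this manifest
```

Applying this with `kubectl` returns a validation error rather than creating the object, which is exactly what the test asserts.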
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":121,"skipped":2134,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:25:32.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 21:25:32.632: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 21:25:32.775: INFO: Waiting for terminating namespaces to be deleted... 
May 11 21:25:32.777: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 11 21:25:32.843: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 21:25:32.843: INFO: Container kube-proxy ready: true, restart count 0 May 11 21:25:32.843: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 21:25:32.843: INFO: Container kindnet-cni ready: true, restart count 0 May 11 21:25:32.843: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 11 21:25:32.857: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 11 21:25:32.857: INFO: Container kube-bench ready: false, restart count 0 May 11 21:25:32.857: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 21:25:32.857: INFO: Container kindnet-cni ready: true, restart count 0 May 11 21:25:32.857: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 11 21:25:32.857: INFO: Container kube-proxy ready: true, restart count 0 May 11 21:25:32.857: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 11 21:25:32.857: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-d9b0bb55-b164-431f-b188-632f68383062 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-d9b0bb55-b164-431f-b188-632f68383062 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d9b0bb55-b164-431f-b188-632f68383062 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:30:49.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7011" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:317.751 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":122,"skipped":2134,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:30:50.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 21:30:50.259: INFO: Waiting up to 5m0s for pod "pod-0c7cb491-ec86-4d2b-944e-11625b7ba94c" in namespace "emptydir-6008" to be "success or failure" May 11 21:30:50.270: INFO: Pod "pod-0c7cb491-ec86-4d2b-944e-11625b7ba94c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.452197ms May 11 21:30:52.682: INFO: Pod "pod-0c7cb491-ec86-4d2b-944e-11625b7ba94c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.422849409s May 11 21:30:54.685: INFO: Pod "pod-0c7cb491-ec86-4d2b-944e-11625b7ba94c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.425833675s STEP: Saw pod success May 11 21:30:54.685: INFO: Pod "pod-0c7cb491-ec86-4d2b-944e-11625b7ba94c" satisfied condition "success or failure" May 11 21:30:54.687: INFO: Trying to get logs from node jerma-worker pod pod-0c7cb491-ec86-4d2b-944e-11625b7ba94c container test-container: STEP: delete the pod May 11 21:30:54.882: INFO: Waiting for pod pod-0c7cb491-ec86-4d2b-944e-11625b7ba94c to disappear May 11 21:30:54.886: INFO: Pod pod-0c7cb491-ec86-4d2b-944e-11625b7ba94c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:30:54.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6008" for this suite. 
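The emptyDir case just completed mounts a tmpfs-backed volume (`medium: Memory`) and checks that a file carries mode 0666. A sketch of the kind of pod involved (names and commands are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox   # the e2e test uses a dedicated mounttest image; busybox is a stand-in
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir, as in the "(root,0666,tmpfs)" variant
```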
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2142,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:30:54.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 11 21:30:54.967: INFO: >>> kubeConfig: /root/.kube/config May 11 21:30:57.526: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:31:07.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2325" for this suite. 
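The CRD-publishing case just completed registers two CustomResourceDefinitions that share a group and version but declare different kinds, then checks that both appear in the served OpenAPI document. A sketch under an assumed `example.com` group (all names illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.example.com      # same group and version as above, different kind
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: bars
    singular: bar
    kind: Bar
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```

Once both CRDs are established, both `Foo` and `Bar` schemas show up under `/openapi/v2`.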
• [SLOW TEST:12.284 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":124,"skipped":2143,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:31:07.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:31:07.317: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a18ce32-b9cc-4d5f-9ddb-7425c42eb582" in namespace "downward-api-7332" to be "success or failure" May 11 21:31:07.321: INFO: Pod "downwardapi-volume-4a18ce32-b9cc-4d5f-9ddb-7425c42eb582": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.91311ms May 11 21:31:09.557: INFO: Pod "downwardapi-volume-4a18ce32-b9cc-4d5f-9ddb-7425c42eb582": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240336603s May 11 21:31:11.585: INFO: Pod "downwardapi-volume-4a18ce32-b9cc-4d5f-9ddb-7425c42eb582": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.268226509s STEP: Saw pod success May 11 21:31:11.585: INFO: Pod "downwardapi-volume-4a18ce32-b9cc-4d5f-9ddb-7425c42eb582" satisfied condition "success or failure" May 11 21:31:11.588: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4a18ce32-b9cc-4d5f-9ddb-7425c42eb582 container client-container: STEP: delete the pod May 11 21:31:12.000: INFO: Waiting for pod downwardapi-volume-4a18ce32-b9cc-4d5f-9ddb-7425c42eb582 to disappear May 11 21:31:12.003: INFO: Pod downwardapi-volume-4a18ce32-b9cc-4d5f-9ddb-7425c42eb582 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:31:12.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7332" for this suite. 
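The downward API volume case just completed projects pod metadata into files and verifies that `defaultMode` governs their permissions. A sketch of such a pod (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox   # stand-in for the test's mounttest image
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400   # applied to every projected file unless an item overrides it
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```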
• [SLOW TEST:5.021 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":125,"skipped":2167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:31:12.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 21:31:12.438: INFO: Waiting up to 5m0s for pod "downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e" in namespace "downward-api-769" to be "success or failure" May 11 21:31:12.575: INFO: Pod "downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e": Phase="Pending", Reason="", readiness=false. Elapsed: 137.546314ms May 11 21:31:14.580: INFO: Pod "downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.142426212s May 11 21:31:16.583: INFO: Pod "downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e": Phase="Running", Reason="", readiness=true. Elapsed: 4.145754434s May 11 21:31:18.586: INFO: Pod "downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.148717802s STEP: Saw pod success May 11 21:31:18.586: INFO: Pod "downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e" satisfied condition "success or failure" May 11 21:31:18.588: INFO: Trying to get logs from node jerma-worker2 pod downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e container dapi-container: STEP: delete the pod May 11 21:31:18.629: INFO: Waiting for pod downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e to disappear May 11 21:31:18.641: INFO: Pod downward-api-f5f9d860-9abe-40de-99ac-d49ac2cc084e no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:31:18.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-769" for this suite. 
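The downward API env-var case just completed relies on a documented fallback: when a container declares no resource limits, `resourceFieldRef` for `limits.cpu`/`limits.memory` resolves to the node's allocatable capacity. A sketch (names illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    # no resources.limits set, so both refs fall back to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```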
• [SLOW TEST:6.449 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:31:18.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:31:40.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2139" for this suite. 
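The Job case just completed depends on `restartPolicy: OnFailure`, under which the kubelet restarts a failed container in place rather than replacing the pod. A sketch of a job that fails once per pod and then succeeds; the marker-file trick and names are illustrative (state survives the restart because it lives on an emptyDir, not the container filesystem):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: fail-once-local-example
spec:
  completions: 2
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure   # "locally restarted": same pod, container restarted
      containers:
      - name: c
        image: busybox
        # first attempt drops a marker and exits 1; the restarted container
        # sees the marker on the emptyDir and exits 0
        command: ["sh", "-c", "if [ -f /data/done ]; then exit 0; fi; touch /data/done; exit 1"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
```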
• [SLOW TEST:22.047 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":127,"skipped":2236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:31:40.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 11 21:31:40.835: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-resource-version 1fc00acf-7759-46fa-83c4-ef1eb7c554eb 15359370 0 2020-05-11 21:31:40 +0000 UTC 
map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 21:31:40.836: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1943 /api/v1/namespaces/watch-1943/configmaps/e2e-watch-test-resource-version 1fc00acf-7759-46fa-83c4-ef1eb7c554eb 15359371 0 2020-05-11 21:31:40 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:31:40.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1943" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":128,"skipped":2269,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:31:40.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is 
kubelet-managed for pod with hostNetwork=false May 11 21:31:55.139: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:31:55.139: INFO: >>> kubeConfig: /root/.kube/config I0511 21:31:55.177758 6 log.go:172] (0xc00096e580) (0xc002a37860) Create stream I0511 21:31:55.177793 6 log.go:172] (0xc00096e580) (0xc002a37860) Stream added, broadcasting: 1 I0511 21:31:55.180197 6 log.go:172] (0xc00096e580) Reply frame received for 1 I0511 21:31:55.180230 6 log.go:172] (0xc00096e580) (0xc001f48000) Create stream I0511 21:31:55.180239 6 log.go:172] (0xc00096e580) (0xc001f48000) Stream added, broadcasting: 3 I0511 21:31:55.181229 6 log.go:172] (0xc00096e580) Reply frame received for 3 I0511 21:31:55.181283 6 log.go:172] (0xc00096e580) (0xc0017332c0) Create stream I0511 21:31:55.181289 6 log.go:172] (0xc00096e580) (0xc0017332c0) Stream added, broadcasting: 5 I0511 21:31:55.182312 6 log.go:172] (0xc00096e580) Reply frame received for 5 I0511 21:31:55.241882 6 log.go:172] (0xc00096e580) Data frame received for 5 I0511 21:31:55.241952 6 log.go:172] (0xc0017332c0) (5) Data frame handling I0511 21:31:55.241983 6 log.go:172] (0xc00096e580) Data frame received for 3 I0511 21:31:55.242011 6 log.go:172] (0xc001f48000) (3) Data frame handling I0511 21:31:55.242026 6 log.go:172] (0xc001f48000) (3) Data frame sent I0511 21:31:55.242043 6 log.go:172] (0xc00096e580) Data frame received for 3 I0511 21:31:55.242058 6 log.go:172] (0xc001f48000) (3) Data frame handling I0511 21:31:55.243421 6 log.go:172] (0xc00096e580) Data frame received for 1 I0511 21:31:55.243444 6 log.go:172] (0xc002a37860) (1) Data frame handling I0511 21:31:55.243458 6 log.go:172] (0xc002a37860) (1) Data frame sent I0511 21:31:55.243469 6 log.go:172] (0xc00096e580) (0xc002a37860) Stream removed, broadcasting: 1 I0511 21:31:55.243492 6 log.go:172] (0xc00096e580) Go away 
received I0511 21:31:55.243585 6 log.go:172] (0xc00096e580) (0xc002a37860) Stream removed, broadcasting: 1 I0511 21:31:55.243602 6 log.go:172] (0xc00096e580) (0xc001f48000) Stream removed, broadcasting: 3 I0511 21:31:55.243609 6 log.go:172] (0xc00096e580) (0xc0017332c0) Stream removed, broadcasting: 5 May 11 21:31:55.243: INFO: Exec stderr: "" May 11 21:31:55.243: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:31:55.243: INFO: >>> kubeConfig: /root/.kube/config I0511 21:31:55.268795 6 log.go:172] (0xc00096ebb0) (0xc002a37a40) Create stream I0511 21:31:55.268833 6 log.go:172] (0xc00096ebb0) (0xc002a37a40) Stream added, broadcasting: 1 I0511 21:31:55.271686 6 log.go:172] (0xc00096ebb0) Reply frame received for 1 I0511 21:31:55.271722 6 log.go:172] (0xc00096ebb0) (0xc002a37ae0) Create stream I0511 21:31:55.271734 6 log.go:172] (0xc00096ebb0) (0xc002a37ae0) Stream added, broadcasting: 3 I0511 21:31:55.272576 6 log.go:172] (0xc00096ebb0) Reply frame received for 3 I0511 21:31:55.272608 6 log.go:172] (0xc00096ebb0) (0xc002a37b80) Create stream I0511 21:31:55.272619 6 log.go:172] (0xc00096ebb0) (0xc002a37b80) Stream added, broadcasting: 5 I0511 21:31:55.273858 6 log.go:172] (0xc00096ebb0) Reply frame received for 5 I0511 21:31:55.334208 6 log.go:172] (0xc00096ebb0) Data frame received for 5 I0511 21:31:55.334247 6 log.go:172] (0xc002a37b80) (5) Data frame handling I0511 21:31:55.334284 6 log.go:172] (0xc00096ebb0) Data frame received for 3 I0511 21:31:55.334307 6 log.go:172] (0xc002a37ae0) (3) Data frame handling I0511 21:31:55.334330 6 log.go:172] (0xc002a37ae0) (3) Data frame sent I0511 21:31:55.334346 6 log.go:172] (0xc00096ebb0) Data frame received for 3 I0511 21:31:55.334357 6 log.go:172] (0xc002a37ae0) (3) Data frame handling I0511 21:31:55.335323 6 log.go:172] (0xc00096ebb0) Data frame received for 
1 I0511 21:31:55.335342 6 log.go:172] (0xc002a37a40) (1) Data frame handling I0511 21:31:55.335356 6 log.go:172] (0xc002a37a40) (1) Data frame sent I0511 21:31:55.335369 6 log.go:172] (0xc00096ebb0) (0xc002a37a40) Stream removed, broadcasting: 1 I0511 21:31:55.335384 6 log.go:172] (0xc00096ebb0) Go away received I0511 21:31:55.335489 6 log.go:172] (0xc00096ebb0) (0xc002a37a40) Stream removed, broadcasting: 1 I0511 21:31:55.335518 6 log.go:172] (0xc00096ebb0) (0xc002a37ae0) Stream removed, broadcasting: 3 I0511 21:31:55.335533 6 log.go:172] (0xc00096ebb0) (0xc002a37b80) Stream removed, broadcasting: 5 May 11 21:31:55.335: INFO: Exec stderr: "" May 11 21:31:55.335: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:31:55.335: INFO: >>> kubeConfig: /root/.kube/config I0511 21:31:55.357713 6 log.go:172] (0xc00096f1e0) (0xc002a37e00) Create stream I0511 21:31:55.357742 6 log.go:172] (0xc00096f1e0) (0xc002a37e00) Stream added, broadcasting: 1 I0511 21:31:55.359979 6 log.go:172] (0xc00096f1e0) Reply frame received for 1 I0511 21:31:55.360031 6 log.go:172] (0xc00096f1e0) (0xc001f08aa0) Create stream I0511 21:31:55.360054 6 log.go:172] (0xc00096f1e0) (0xc001f08aa0) Stream added, broadcasting: 3 I0511 21:31:55.365476 6 log.go:172] (0xc00096f1e0) Reply frame received for 3 I0511 21:31:55.365538 6 log.go:172] (0xc00096f1e0) (0xc001f480a0) Create stream I0511 21:31:55.365572 6 log.go:172] (0xc00096f1e0) (0xc001f480a0) Stream added, broadcasting: 5 I0511 21:31:55.367215 6 log.go:172] (0xc00096f1e0) Reply frame received for 5 I0511 21:31:55.435295 6 log.go:172] (0xc00096f1e0) Data frame received for 5 I0511 21:31:55.435355 6 log.go:172] (0xc001f480a0) (5) Data frame handling I0511 21:31:55.435387 6 log.go:172] (0xc00096f1e0) Data frame received for 3 I0511 21:31:55.435406 6 log.go:172] (0xc001f08aa0) (3) Data frame handling 
I0511 21:31:55.435429 6 log.go:172] (0xc001f08aa0) (3) Data frame sent I0511 21:31:55.435446 6 log.go:172] (0xc00096f1e0) Data frame received for 3 I0511 21:31:55.435462 6 log.go:172] (0xc001f08aa0) (3) Data frame handling I0511 21:31:55.437370 6 log.go:172] (0xc00096f1e0) Data frame received for 1 I0511 21:31:55.437406 6 log.go:172] (0xc002a37e00) (1) Data frame handling I0511 21:31:55.437447 6 log.go:172] (0xc002a37e00) (1) Data frame sent I0511 21:31:55.437474 6 log.go:172] (0xc00096f1e0) (0xc002a37e00) Stream removed, broadcasting: 1 I0511 21:31:55.437498 6 log.go:172] (0xc00096f1e0) Go away received I0511 21:31:55.437608 6 log.go:172] (0xc00096f1e0) (0xc002a37e00) Stream removed, broadcasting: 1 I0511 21:31:55.437685 6 log.go:172] (0xc00096f1e0) (0xc001f08aa0) Stream removed, broadcasting: 3 I0511 21:31:55.437761 6 log.go:172] (0xc00096f1e0) (0xc001f480a0) Stream removed, broadcasting: 5 May 11 21:31:55.437: INFO: Exec stderr: "" May 11 21:31:55.437: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:31:55.437: INFO: >>> kubeConfig: /root/.kube/config I0511 21:31:55.515024 6 log.go:172] (0xc0008edd90) (0xc001f08e60) Create stream I0511 21:31:55.515055 6 log.go:172] (0xc0008edd90) (0xc001f08e60) Stream added, broadcasting: 1 I0511 21:31:55.517684 6 log.go:172] (0xc0008edd90) Reply frame received for 1 I0511 21:31:55.517730 6 log.go:172] (0xc0008edd90) (0xc001f08f00) Create stream I0511 21:31:55.517747 6 log.go:172] (0xc0008edd90) (0xc001f08f00) Stream added, broadcasting: 3 I0511 21:31:55.518997 6 log.go:172] (0xc0008edd90) Reply frame received for 3 I0511 21:31:55.519037 6 log.go:172] (0xc0008edd90) (0xc002a37ea0) Create stream I0511 21:31:55.519053 6 log.go:172] (0xc0008edd90) (0xc002a37ea0) Stream added, broadcasting: 5 I0511 21:31:55.520025 6 log.go:172] (0xc0008edd90) Reply frame received for 5 
I0511 21:31:55.593260 6 log.go:172] (0xc0008edd90) Data frame received for 5
I0511 21:31:55.593296 6 log.go:172] (0xc002a37ea0) (5) Data frame handling
I0511 21:31:55.593315 6 log.go:172] (0xc0008edd90) Data frame received for 3
I0511 21:31:55.593321 6 log.go:172] (0xc001f08f00) (3) Data frame handling
I0511 21:31:55.593329 6 log.go:172] (0xc001f08f00) (3) Data frame sent
I0511 21:31:55.593335 6 log.go:172] (0xc0008edd90) Data frame received for 3
I0511 21:31:55.593341 6 log.go:172] (0xc001f08f00) (3) Data frame handling
I0511 21:31:55.594760 6 log.go:172] (0xc0008edd90) Data frame received for 1
I0511 21:31:55.594827 6 log.go:172] (0xc001f08e60) (1) Data frame handling
I0511 21:31:55.594861 6 log.go:172] (0xc001f08e60) (1) Data frame sent
I0511 21:31:55.594884 6 log.go:172] (0xc0008edd90) (0xc001f08e60) Stream removed, broadcasting: 1
I0511 21:31:55.594905 6 log.go:172] (0xc0008edd90) Go away received
I0511 21:31:55.595072 6 log.go:172] (0xc0008edd90) (0xc001f08e60) Stream removed, broadcasting: 1
I0511 21:31:55.595097 6 log.go:172] (0xc0008edd90) (0xc001f08f00) Stream removed, broadcasting: 3
I0511 21:31:55.595108 6 log.go:172] (0xc0008edd90) (0xc002a37ea0) Stream removed, broadcasting: 5
May 11 21:31:55.595: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May 11 21:31:55.595: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:31:55.595: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:31:55.643121 6 log.go:172] (0xc002414370) (0xc001f48320) Create stream
I0511 21:31:55.643156 6 log.go:172] (0xc002414370) (0xc001f48320) Stream added, broadcasting: 1
I0511 21:31:55.645782 6 log.go:172] (0xc002414370) Reply frame received for 1
I0511 21:31:55.645823 6 log.go:172] (0xc002414370) (0xc002a37f40) Create stream
I0511 21:31:55.645835 6 log.go:172] (0xc002414370) (0xc002a37f40) Stream added, broadcasting: 3
I0511 21:31:55.647182 6 log.go:172] (0xc002414370) Reply frame received for 3
I0511 21:31:55.647212 6 log.go:172] (0xc002414370) (0xc001404140) Create stream
I0511 21:31:55.647220 6 log.go:172] (0xc002414370) (0xc001404140) Stream added, broadcasting: 5
I0511 21:31:55.648244 6 log.go:172] (0xc002414370) Reply frame received for 5
I0511 21:31:55.712394 6 log.go:172] (0xc002414370) Data frame received for 5
I0511 21:31:55.712446 6 log.go:172] (0xc001404140) (5) Data frame handling
I0511 21:31:55.712469 6 log.go:172] (0xc002414370) Data frame received for 3
I0511 21:31:55.712480 6 log.go:172] (0xc002a37f40) (3) Data frame handling
I0511 21:31:55.712490 6 log.go:172] (0xc002a37f40) (3) Data frame sent
I0511 21:31:55.712501 6 log.go:172] (0xc002414370) Data frame received for 3
I0511 21:31:55.712515 6 log.go:172] (0xc002a37f40) (3) Data frame handling
I0511 21:31:55.714087 6 log.go:172] (0xc002414370) Data frame received for 1
I0511 21:31:55.714120 6 log.go:172] (0xc001f48320) (1) Data frame handling
I0511 21:31:55.714138 6 log.go:172] (0xc001f48320) (1) Data frame sent
I0511 21:31:55.714159 6 log.go:172] (0xc002414370) (0xc001f48320) Stream removed, broadcasting: 1
I0511 21:31:55.714183 6 log.go:172] (0xc002414370) Go away received
I0511 21:31:55.714308 6 log.go:172] (0xc002414370) (0xc001f48320) Stream removed, broadcasting: 1
I0511 21:31:55.714338 6 log.go:172] (0xc002414370) (0xc002a37f40) Stream removed, broadcasting: 3
I0511 21:31:55.714360 6 log.go:172] (0xc002414370) (0xc001404140) Stream removed, broadcasting: 5
May 11 21:31:55.714: INFO: Exec stderr: ""
May 11 21:31:55.714: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:31:55.714: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:31:55.739427 6 log.go:172] (0xc0024149a0) (0xc001f485a0) Create stream
I0511 21:31:55.739454 6 log.go:172] (0xc0024149a0) (0xc001f485a0) Stream added, broadcasting: 1
I0511 21:31:55.742103 6 log.go:172] (0xc0024149a0) Reply frame received for 1
I0511 21:31:55.742146 6 log.go:172] (0xc0024149a0) (0xc001f48640) Create stream
I0511 21:31:55.742163 6 log.go:172] (0xc0024149a0) (0xc001f48640) Stream added, broadcasting: 3
I0511 21:31:55.743125 6 log.go:172] (0xc0024149a0) Reply frame received for 3
I0511 21:31:55.743164 6 log.go:172] (0xc0024149a0) (0xc001f08fa0) Create stream
I0511 21:31:55.743178 6 log.go:172] (0xc0024149a0) (0xc001f08fa0) Stream added, broadcasting: 5
I0511 21:31:55.744207 6 log.go:172] (0xc0024149a0) Reply frame received for 5
I0511 21:31:55.797101 6 log.go:172] (0xc0024149a0) Data frame received for 5
I0511 21:31:55.797284 6 log.go:172] (0xc001f08fa0) (5) Data frame handling
I0511 21:31:55.797316 6 log.go:172] (0xc0024149a0) Data frame received for 3
I0511 21:31:55.797326 6 log.go:172] (0xc001f48640) (3) Data frame handling
I0511 21:31:55.797336 6 log.go:172] (0xc001f48640) (3) Data frame sent
I0511 21:31:55.797343 6 log.go:172] (0xc0024149a0) Data frame received for 3
I0511 21:31:55.797348 6 log.go:172] (0xc001f48640) (3) Data frame handling
I0511 21:31:55.799116 6 log.go:172] (0xc0024149a0) Data frame received for 1
I0511 21:31:55.799140 6 log.go:172] (0xc001f485a0) (1) Data frame handling
I0511 21:31:55.799176 6 log.go:172] (0xc001f485a0) (1) Data frame sent
I0511 21:31:55.799196 6 log.go:172] (0xc0024149a0) (0xc001f485a0) Stream removed, broadcasting: 1
I0511 21:31:55.799319 6 log.go:172] (0xc0024149a0) (0xc001f485a0) Stream removed, broadcasting: 1
I0511 21:31:55.799335 6 log.go:172] (0xc0024149a0) (0xc001f48640) Stream removed, broadcasting: 3
I0511 21:31:55.799416 6 log.go:172] (0xc0024149a0) Go away received
I0511 21:31:55.799544 6 log.go:172] (0xc0024149a0) (0xc001f08fa0) Stream removed, broadcasting: 5
May 11 21:31:55.799: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May 11 21:31:55.799: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:31:55.799: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:31:55.837474 6 log.go:172] (0xc002414fd0) (0xc001f488c0) Create stream
I0511 21:31:55.837514 6 log.go:172] (0xc002414fd0) (0xc001f488c0) Stream added, broadcasting: 1
I0511 21:31:55.840048 6 log.go:172] (0xc002414fd0) Reply frame received for 1
I0511 21:31:55.840085 6 log.go:172] (0xc002414fd0) (0xc001f48960) Create stream
I0511 21:31:55.840096 6 log.go:172] (0xc002414fd0) (0xc001f48960) Stream added, broadcasting: 3
I0511 21:31:55.841493 6 log.go:172] (0xc002414fd0) Reply frame received for 3
I0511 21:31:55.841551 6 log.go:172] (0xc002414fd0) (0xc001733360) Create stream
I0511 21:31:55.841583 6 log.go:172] (0xc002414fd0) (0xc001733360) Stream added, broadcasting: 5
I0511 21:31:55.843074 6 log.go:172] (0xc002414fd0) Reply frame received for 5
I0511 21:31:55.901458 6 log.go:172] (0xc002414fd0) Data frame received for 3
I0511 21:31:55.901486 6 log.go:172] (0xc001f48960) (3) Data frame handling
I0511 21:31:55.901519 6 log.go:172] (0xc001f48960) (3) Data frame sent
I0511 21:31:55.901691 6 log.go:172] (0xc002414fd0) Data frame received for 3
I0511 21:31:55.901722 6 log.go:172] (0xc001f48960) (3) Data frame handling
I0511 21:31:55.901751 6 log.go:172] (0xc002414fd0) Data frame received for 5
I0511 21:31:55.901767 6 log.go:172] (0xc001733360) (5) Data frame handling
I0511 21:31:55.903387 6 log.go:172] (0xc002414fd0) Data frame received for 1
I0511 21:31:55.903411 6 log.go:172] (0xc001f488c0) (1) Data frame handling
I0511 21:31:55.903433 6 log.go:172] (0xc001f488c0) (1) Data frame sent
I0511 21:31:55.903445 6 log.go:172] (0xc002414fd0) (0xc001f488c0) Stream removed, broadcasting: 1
I0511 21:31:55.903533 6 log.go:172] (0xc002414fd0) Go away received
I0511 21:31:55.903572 6 log.go:172] (0xc002414fd0) (0xc001f488c0) Stream removed, broadcasting: 1
I0511 21:31:55.903592 6 log.go:172] (0xc002414fd0) (0xc001f48960) Stream removed, broadcasting: 3
I0511 21:31:55.903603 6 log.go:172] (0xc002414fd0) (0xc001733360) Stream removed, broadcasting: 5
May 11 21:31:55.903: INFO: Exec stderr: ""
May 11 21:31:55.903: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:31:55.903: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:31:55.933957 6 log.go:172] (0xc000bee790) (0xc0017339a0) Create stream
I0511 21:31:55.933994 6 log.go:172] (0xc000bee790) (0xc0017339a0) Stream added, broadcasting: 1
I0511 21:31:55.939230 6 log.go:172] (0xc000bee790) Reply frame received for 1
I0511 21:31:55.939281 6 log.go:172] (0xc000bee790) (0xc002480000) Create stream
I0511 21:31:55.939293 6 log.go:172] (0xc000bee790) (0xc002480000) Stream added, broadcasting: 3
I0511 21:31:55.940166 6 log.go:172] (0xc000bee790) Reply frame received for 3
I0511 21:31:55.940194 6 log.go:172] (0xc000bee790) (0xc001f090e0) Create stream
I0511 21:31:55.940205 6 log.go:172] (0xc000bee790) (0xc001f090e0) Stream added, broadcasting: 5
I0511 21:31:55.941004 6 log.go:172] (0xc000bee790) Reply frame received for 5
I0511 21:31:56.002234 6 log.go:172] (0xc000bee790) Data frame received for 5
I0511 21:31:56.002306 6 log.go:172] (0xc000bee790) Data frame received for 3
I0511 21:31:56.002372 6 log.go:172] (0xc002480000) (3) Data frame handling
I0511 21:31:56.002417 6 log.go:172] (0xc002480000) (3) Data frame sent
I0511 21:31:56.002444 6 log.go:172] (0xc000bee790) Data frame received for 3
I0511 21:31:56.002464 6 log.go:172] (0xc002480000) (3) Data frame handling
I0511 21:31:56.002487 6 log.go:172] (0xc001f090e0) (5) Data frame handling
I0511 21:31:56.004108 6 log.go:172] (0xc000bee790) Data frame received for 1
I0511 21:31:56.004136 6 log.go:172] (0xc0017339a0) (1) Data frame handling
I0511 21:31:56.004156 6 log.go:172] (0xc0017339a0) (1) Data frame sent
I0511 21:31:56.004289 6 log.go:172] (0xc000bee790) (0xc0017339a0) Stream removed, broadcasting: 1
I0511 21:31:56.004411 6 log.go:172] (0xc000bee790) (0xc0017339a0) Stream removed, broadcasting: 1
I0511 21:31:56.004432 6 log.go:172] (0xc000bee790) (0xc002480000) Stream removed, broadcasting: 3
I0511 21:31:56.004443 6 log.go:172] (0xc000bee790) (0xc001f090e0) Stream removed, broadcasting: 5
May 11 21:31:56.004: INFO: Exec stderr: ""
May 11 21:31:56.004: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:31:56.004: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:31:56.006756 6 log.go:172] (0xc000bee790) Go away received
I0511 21:31:56.039357 6 log.go:172] (0xc00096f550) (0xc002480140) Create stream
I0511 21:31:56.039392 6 log.go:172] (0xc00096f550) (0xc002480140) Stream added, broadcasting: 1
I0511 21:31:56.041874 6 log.go:172] (0xc00096f550) Reply frame received for 1
I0511 21:31:56.041932 6 log.go:172] (0xc00096f550) (0xc0024801e0) Create stream
I0511 21:31:56.041948 6 log.go:172] (0xc00096f550) (0xc0024801e0) Stream added, broadcasting: 3
I0511 21:31:56.042846 6 log.go:172] (0xc00096f550) Reply frame received for 3
I0511 21:31:56.042884 6 log.go:172] (0xc00096f550) (0xc001733b80) Create stream
I0511 21:31:56.042898 6 log.go:172] (0xc00096f550) (0xc001733b80) Stream added, broadcasting: 5
I0511 21:31:56.043750 6 log.go:172] (0xc00096f550) Reply frame received for 5
I0511 21:31:56.134808 6 log.go:172] (0xc00096f550) Data frame received for 5
I0511 21:31:56.134842 6 log.go:172] (0xc001733b80) (5) Data frame handling
I0511 21:31:56.134864 6 log.go:172] (0xc00096f550) Data frame received for 3
I0511 21:31:56.134876 6 log.go:172] (0xc0024801e0) (3) Data frame handling
I0511 21:31:56.134893 6 log.go:172] (0xc0024801e0) (3) Data frame sent
I0511 21:31:56.134904 6 log.go:172] (0xc00096f550) Data frame received for 3
I0511 21:31:56.134914 6 log.go:172] (0xc0024801e0) (3) Data frame handling
I0511 21:31:56.136097 6 log.go:172] (0xc00096f550) Data frame received for 1
I0511 21:31:56.136126 6 log.go:172] (0xc002480140) (1) Data frame handling
I0511 21:31:56.136147 6 log.go:172] (0xc002480140) (1) Data frame sent
I0511 21:31:56.136169 6 log.go:172] (0xc00096f550) (0xc002480140) Stream removed, broadcasting: 1
I0511 21:31:56.136198 6 log.go:172] (0xc00096f550) Go away received
I0511 21:31:56.136276 6 log.go:172] (0xc00096f550) (0xc002480140) Stream removed, broadcasting: 1
I0511 21:31:56.136302 6 log.go:172] (0xc00096f550) (0xc0024801e0) Stream removed, broadcasting: 3
I0511 21:31:56.136322 6 log.go:172] (0xc00096f550) (0xc001733b80) Stream removed, broadcasting: 5
May 11 21:31:56.136: INFO: Exec stderr: ""
May 11 21:31:56.136: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5859 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 21:31:56.136: INFO: >>> kubeConfig: /root/.kube/config
I0511 21:31:56.169728 6 log.go:172] (0xc0023e2420) (0xc001f092c0) Create stream
I0511 21:31:56.169760 6 log.go:172] (0xc0023e2420) (0xc001f092c0) Stream added, broadcasting: 1
I0511 21:31:56.172394 6 log.go:172] (0xc0023e2420) Reply frame received for 1
I0511 21:31:56.172426 6 log.go:172] (0xc0023e2420) (0xc001f09360) Create stream
I0511 21:31:56.172437 6 log.go:172] (0xc0023e2420) (0xc001f09360) Stream added, broadcasting: 3
I0511 21:31:56.173383 6 log.go:172] (0xc0023e2420) Reply frame received for 3
I0511 21:31:56.173431 6 log.go:172] (0xc0023e2420) (0xc002480280) Create stream
I0511 21:31:56.173457 6 log.go:172] (0xc0023e2420) (0xc002480280) Stream added, broadcasting: 5
I0511 21:31:56.174348 6 log.go:172] (0xc0023e2420) Reply frame received for 5
I0511 21:31:56.240995 6 log.go:172] (0xc0023e2420) Data frame received for 3
I0511 21:31:56.241044 6 log.go:172] (0xc001f09360) (3) Data frame handling
I0511 21:31:56.241062 6 log.go:172] (0xc001f09360) (3) Data frame sent
I0511 21:31:56.241077 6 log.go:172] (0xc0023e2420) Data frame received for 3
I0511 21:31:56.241305 6 log.go:172] (0xc001f09360) (3) Data frame handling
I0511 21:31:56.241353 6 log.go:172] (0xc0023e2420) Data frame received for 5
I0511 21:31:56.241382 6 log.go:172] (0xc002480280) (5) Data frame handling
I0511 21:31:56.242767 6 log.go:172] (0xc0023e2420) Data frame received for 1
I0511 21:31:56.242801 6 log.go:172] (0xc001f092c0) (1) Data frame handling
I0511 21:31:56.242822 6 log.go:172] (0xc001f092c0) (1) Data frame sent
I0511 21:31:56.242835 6 log.go:172] (0xc0023e2420) (0xc001f092c0) Stream removed, broadcasting: 1
I0511 21:31:56.242929 6 log.go:172] (0xc0023e2420) (0xc001f092c0) Stream removed, broadcasting: 1
I0511 21:31:56.242958 6 log.go:172] (0xc0023e2420) (0xc001f09360) Stream removed, broadcasting: 3
I0511 21:31:56.242967 6 log.go:172] (0xc0023e2420) (0xc002480280) Stream removed, broadcasting: 5
May 11 21:31:56.242: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:31:56.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0511 21:31:56.243065 6 log.go:172] (0xc0023e2420) Go away received
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5859" for this suite.
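The execs above check each container's /etc/hosts under the rule this test exercises: the kubelet rewrites /etc/hosts only when the pod is not on the host network and the container does not mount its own file over /etc/hosts. A minimal stdlib-only sketch of that decision (the helper name and signature are ours, not the e2e framework's):

```go
package main

import "fmt"

// kubeletManagesEtcHosts mirrors the opt-out rule verified by the test:
// a container's /etc/hosts is kubelet-managed unless the pod runs with
// hostNetwork=true or the container mounts its own /etc/hosts.
func kubeletManagesEtcHosts(hostNetwork, mountsOwnEtcHosts bool) bool {
	return !hostNetwork && !mountsOwnEtcHosts
}

func main() {
	fmt.Println(kubeletManagesEtcHosts(false, false)) // plain container: managed
	fmt.Println(kubeletManagesEtcHosts(false, true))  // own /etc/hosts mount (busybox-3): not managed
	fmt.Println(kubeletManagesEtcHosts(true, false))  // hostNetwork pod: not managed
}
```

This is why the log verifies three cases: the managed containers, the container with its own /etc/hosts mount, and the hostNetwork pod.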
• [SLOW TEST:15.393 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2286,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:31:56.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
May 11 21:31:56.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:32:12.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8924" for this suite.
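The rename steps above boil down to a membership check over the CRD's served versions: after the rename, the new name must be published, the old one gone, and the untouched version unchanged. A simplified stand-in (the struct is not the real apiextensions type, and the v2-to-v3 rename below is a hypothetical illustration):

```go
package main

import "fmt"

// crdVersion is a trimmed stand-in for a CRD version entry: only versions
// still present and marked served should appear in the published spec.
type crdVersion struct {
	Name   string
	Served bool
}

// servedVersions lists the version names the discovery/openapi endpoints
// should publish for the CRD.
func servedVersions(versions []crdVersion) []string {
	var out []string
	for _, v := range versions {
		if v.Served {
			out = append(out, v.Name)
		}
	}
	return out
}

func main() {
	// hypothetical state after renaming v2 -> v3: v2 no longer exists,
	// v3 is served, v1 is unchanged
	fmt.Println(servedVersions([]crdVersion{{"v1", true}, {"v3", true}}))
}
```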
• [SLOW TEST:15.803 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":130,"skipped":2286,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:32:12.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0511 21:32:13.274855 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 21:32:13.274: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:32:13.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7704" for this suite.
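The "expected 0 rs, got 1 rs" lines are not failures: the test polls until the garbage collector catches up with the deleted Deployment's ReplicaSets and Pods. A stdlib-only stand-in for the framework's polling helper (`pollUntil` is our name; the e2e code uses its own wait utilities):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil re-evaluates cond every interval until it reports done,
// errors out, or the timeout elapses -- the shape of the wait the GC
// test performs after deleting the deployment without orphaning.
func pollUntil(interval, timeout time.Duration, cond func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		done, err := cond()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	remainingRS := 1 // "expected 0 rs, got 1 rs" on the first poll
	err := pollUntil(time.Millisecond, time.Second, func() (bool, error) {
		if remainingRS > 0 {
			remainingRS-- // the garbage collector catches up between polls
			return false, nil
		}
		return true, nil
	})
	fmt.Println(err)
}
```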
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":131,"skipped":2288,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:32:13.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-2564/configmap-test-e1f42796-eabb-4cbc-ab25-9404c0e1aca7
STEP: Creating a pod to test consume configMaps
May 11 21:32:13.658: INFO: Waiting up to 5m0s for pod "pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396" in namespace "configmap-2564" to be "success or failure"
May 11 21:32:13.774: INFO: Pod "pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396": Phase="Pending", Reason="", readiness=false. Elapsed: 115.397031ms
May 11 21:32:15.778: INFO: Pod "pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119722723s
May 11 21:32:17.782: INFO: Pod "pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123656538s
May 11 21:32:19.785: INFO: Pod "pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126433321s
STEP: Saw pod success
May 11 21:32:19.785: INFO: Pod "pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396" satisfied condition "success or failure"
May 11 21:32:19.787: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396 container env-test:
STEP: delete the pod
May 11 21:32:19.882: INFO: Waiting for pod pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396 to disappear
May 11 21:32:19.922: INFO: Pod pod-configmaps-5c8e503f-ac4c-41af-971f-0feb447ff396 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:32:19.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2564" for this suite.
• [SLOW TEST:6.648 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2302,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:32:19.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:32:20.935: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:32:23.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829540, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829540, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829541, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829540, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:32:25.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829540, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829540, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829541, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829540, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:32:28.232: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
May 11 21:32:32.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1857 to-be-attached-pod -i -c=container1'
May 11 21:32:32.402: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:32:32.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1857" for this suite.
STEP: Destroying namespace "webhook-1857-markers" for this suite.
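The `rc: 1` above is the expected outcome: the registered webhook rejects the `pods/attach` subresource request, so `kubectl attach` fails. A stdlib-only sketch of the decision such a webhook makes (the structs are trimmed stand-ins for the admission/v1 types, and the message text is illustrative, not the sample webhook's exact wording):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// admissionRequest/admissionResponse are minimal stand-ins for the
// admission/v1 AdmissionReview request and response.
type admissionRequest struct {
	SubResource string `json:"subResource"`
}

type admissionResponse struct {
	Allowed bool   `json:"allowed"`
	Message string `json:"message,omitempty"`
}

// denyAttach rejects any request for the pods "attach" subresource and
// allows everything else.
func denyAttach(req admissionRequest) admissionResponse {
	if req.SubResource == "attach" {
		return admissionResponse{Allowed: false, Message: "attaching to pods is not allowed"}
	}
	return admissionResponse{Allowed: true}
}

func main() {
	// the apiserver turns allowed=false into the error kubectl surfaces as rc: 1
	out, _ := json.Marshal(denyAttach(admissionRequest{SubResource: "attach"}))
	fmt.Println(string(out))
}
```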
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.567 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":133,"skipped":2304,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:32:32.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1784.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1784.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 11 21:32:44.798: INFO: DNS probes using dns-1784/dns-test-6ef04265-6c4c-46a6-adf3-a1975149b7af succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:32:44.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1784" for this suite.
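The `awk` fragment in the probe scripts above builds the pod's A record name: the pod IP with dots replaced by dashes, qualified into the namespace's `pod` subdomain. The same mangling in Go (the pod IP below is hypothetical; the log does not record the probe pod's address):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord reproduces the hostname-mangling pipeline from the probe
// script: dots in the pod IP become dashes, then the name is qualified
// as <ip-with-dashes>.<namespace>.pod.cluster.local.
func podARecord(podIP, namespace string) string {
	return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.244.1.5", "dns-1784"))
	// -> 10-244-1-5.dns-1784.pod.cluster.local
}
```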
• [SLOW TEST:12.354 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":134,"skipped":2341,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:32:44.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 11 21:32:45.442: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 11 21:32:47.452: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829565, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829565, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829565, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829565, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:32:50.534: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 21:32:50.536: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:32:52.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4888" for this suite.
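At its core, the conversion webhook the test deploys rewrites a custom resource from the v1 schema to the v2 schema and bumps `apiVersion`. A stdlib-only sketch of that shape; the `hostPort` to `host`/`port` split and the `stable.example.com` group are illustrative assumptions, not necessarily the sample converter's exact fields:

```go
package main

import (
	"fmt"
	"strings"
)

// convertV1ToV2 copies the object, bumps apiVersion, and rewrites an
// assumed v1-only "hostPort" field into separate v2 "host"/"port" fields.
func convertV1ToV2(obj map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(obj))
	for k, v := range obj {
		out[k] = v
	}
	out["apiVersion"] = "stable.example.com/v2"
	if hostPort, ok := obj["hostPort"].(string); ok {
		parts := strings.SplitN(hostPort, ":", 2)
		out["host"] = parts[0]
		if len(parts) == 2 {
			out["port"] = parts[1]
		}
		delete(out, "hostPort")
	}
	return out
}

func main() {
	v1 := map[string]interface{}{"apiVersion": "stable.example.com/v1", "hostPort": "localhost:8080"}
	fmt.Println(convertV1ToV2(v1))
}
```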
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:8.897 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":135,"skipped":2344,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:32:53.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 11 21:32:54.238: INFO: Waiting up to 5m0s for pod "pod-40b63b03-4ce0-44a7-b383-9c12845aef04" in namespace "emptydir-3829" to be "success or failure" May 11 21:32:54.253: INFO: Pod "pod-40b63b03-4ce0-44a7-b383-9c12845aef04": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.970912ms May 11 21:32:56.403: INFO: Pod "pod-40b63b03-4ce0-44a7-b383-9c12845aef04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164516001s May 11 21:32:58.498: INFO: Pod "pod-40b63b03-4ce0-44a7-b383-9c12845aef04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.260013029s May 11 21:33:00.503: INFO: Pod "pod-40b63b03-4ce0-44a7-b383-9c12845aef04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.264417419s STEP: Saw pod success May 11 21:33:00.503: INFO: Pod "pod-40b63b03-4ce0-44a7-b383-9c12845aef04" satisfied condition "success or failure" May 11 21:33:00.507: INFO: Trying to get logs from node jerma-worker pod pod-40b63b03-4ce0-44a7-b383-9c12845aef04 container test-container: STEP: delete the pod May 11 21:33:00.555: INFO: Waiting for pod pod-40b63b03-4ce0-44a7-b383-9c12845aef04 to disappear May 11 21:33:00.558: INFO: Pod pod-40b63b03-4ce0-44a7-b383-9c12845aef04 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:33:00.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3829" for this suite. 
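The emptydir test above creates its pod through the framework, so the spec never appears in the log. A minimal sketch of a comparable pod (pod name, image, and command are illustrative assumptions, not the suite's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check   # hypothetical name; the suite generates a UUID-based one
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # assumption; the suite uses its own test image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # prints the volume's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # "default" medium: backed by node storage, not tmpfs
```

The pod runs to completion and the framework reads its container logs, which matches the "success or failure" and "Trying to get logs" entries above.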
• [SLOW TEST:6.817 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2348,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:33:00.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 11 21:33:00.707: INFO: Waiting up to 5m0s for pod "pod-00e9255c-19dd-451d-a39d-1c88bb308549" in namespace "emptydir-5694" to be "success or failure" May 11 21:33:00.710: INFO: Pod "pod-00e9255c-19dd-451d-a39d-1c88bb308549": Phase="Pending", Reason="", readiness=false. Elapsed: 3.50321ms May 11 21:33:02.714: INFO: Pod "pod-00e9255c-19dd-451d-a39d-1c88bb308549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007652937s May 11 21:33:04.718: INFO: Pod "pod-00e9255c-19dd-451d-a39d-1c88bb308549": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011238845s STEP: Saw pod success May 11 21:33:04.718: INFO: Pod "pod-00e9255c-19dd-451d-a39d-1c88bb308549" satisfied condition "success or failure" May 11 21:33:04.721: INFO: Trying to get logs from node jerma-worker pod pod-00e9255c-19dd-451d-a39d-1c88bb308549 container test-container: STEP: delete the pod May 11 21:33:04.797: INFO: Waiting for pod pod-00e9255c-19dd-451d-a39d-1c88bb308549 to disappear May 11 21:33:04.870: INFO: Pod pod-00e9255c-19dd-451d-a39d-1c88bb308549 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:33:04.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5694" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2350,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:33:04.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
running the image docker.io/library/httpd:2.4.38-alpine May 11 21:33:05.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-491' May 11 21:33:05.144: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 21:33:05.144: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 11 21:33:05.373: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-pqkx2] May 11 21:33:05.373: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-pqkx2" in namespace "kubectl-491" to be "running and ready" May 11 21:33:05.375: INFO: Pod "e2e-test-httpd-rc-pqkx2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125442ms May 11 21:33:07.451: INFO: Pod "e2e-test-httpd-rc-pqkx2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077882081s May 11 21:33:09.454: INFO: Pod "e2e-test-httpd-rc-pqkx2": Phase="Running", Reason="", readiness=true. Elapsed: 4.081523572s May 11 21:33:09.454: INFO: Pod "e2e-test-httpd-rc-pqkx2" satisfied condition "running and ready" May 11 21:33:09.454: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-pqkx2] May 11 21:33:09.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-491' May 11 21:33:09.552: INFO: stderr: "" May 11 21:33:09.552: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.178. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.178. Set the 'ServerName' directive globally to suppress this message\n[Mon May 11 21:33:08.158555 2020] [mpm_event:notice] [pid 1:tid 139692212345704] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon May 11 21:33:08.158618 2020] [core:notice] [pid 1:tid 139692212345704] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 11 21:33:09.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-491' May 11 21:33:09.656: INFO: stderr: "" May 11 21:33:09.656: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:33:09.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-491" for this suite. 
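The stderr above notes that `kubectl run --generator=run/v1` is deprecated. A ReplicationController manifest that is roughly equivalent to that invocation (a sketch: the `run` label key is what that generator used, but details here are assumptions):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-httpd-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-httpd-rc   # must match the pod template labels below
  template:
    metadata:
      labels:
        run: e2e-test-httpd-rc
    spec:
      containers:
      - name: e2e-test-httpd-rc
        image: docker.io/library/httpd:2.4.38-alpine
```

Applied with `kubectl create -f`, this produces the same `replicationcontroller/e2e-test-httpd-rc created` output seen in the log.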
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":138,"skipped":2379,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:33:09.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-0afbfafb-da95-485a-8cac-6b7e56732276 in namespace container-probe-8320 May 11 21:33:13.804: INFO: Started pod liveness-0afbfafb-da95-485a-8cac-6b7e56732276 in namespace container-probe-8320 STEP: checking the pod's current state and verifying that restartCount is present May 11 21:33:13.807: INFO: Initial restart count of pod liveness-0afbfafb-da95-485a-8cac-6b7e56732276 is 0 May 11 21:33:43.287: INFO: Restart count of pod container-probe-8320/liveness-0afbfafb-da95-485a-8cac-6b7e56732276 is now 1 (29.479198808s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:33:43.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8320" for this suite. 
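The spec of the liveness pod above is likewise created in code. A sketch of a pod with an HTTP /healthz liveness probe that would produce the observed restart (image and probe timings are placeholders, not the suite's actual values):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http   # the suite uses a generated name
spec:
  containers:
  - name: liveness
    image: registry.example/healthz-server   # placeholder: any server whose /healthz begins failing after a delay
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
```

Once /healthz starts returning errors, the kubelet kills and restarts the container, which is why the log shows restartCount going from 0 to 1 roughly 30 seconds after the initial check.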
• [SLOW TEST:33.748 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2394,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:33:43.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:33:43.911: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-e48759ce-1a8e-40ef-8d20-fc9b651e8b43" in namespace "security-context-test-2651" to be "success or failure" May 11 21:33:43.984: INFO: Pod "busybox-readonly-false-e48759ce-1a8e-40ef-8d20-fc9b651e8b43": Phase="Pending", Reason="", readiness=false. 
Elapsed: 73.20083ms May 11 21:33:46.097: INFO: Pod "busybox-readonly-false-e48759ce-1a8e-40ef-8d20-fc9b651e8b43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186137253s May 11 21:33:48.101: INFO: Pod "busybox-readonly-false-e48759ce-1a8e-40ef-8d20-fc9b651e8b43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189816513s May 11 21:33:48.101: INFO: Pod "busybox-readonly-false-e48759ce-1a8e-40ef-8d20-fc9b651e8b43" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:33:48.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2651" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:33:48.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not 
set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:33:48.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d1d5182f-767a-49d2-aaa8-36318d21f8c3" in namespace "downward-api-2793" to be "success or failure" May 11 21:33:48.529: INFO: Pod "downwardapi-volume-d1d5182f-767a-49d2-aaa8-36318d21f8c3": Phase="Pending", Reason="", readiness=false. Elapsed: 47.803373ms May 11 21:33:50.533: INFO: Pod "downwardapi-volume-d1d5182f-767a-49d2-aaa8-36318d21f8c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052380685s May 11 21:33:52.536: INFO: Pod "downwardapi-volume-d1d5182f-767a-49d2-aaa8-36318d21f8c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055239765s STEP: Saw pod success May 11 21:33:52.536: INFO: Pod "downwardapi-volume-d1d5182f-767a-49d2-aaa8-36318d21f8c3" satisfied condition "success or failure" May 11 21:33:52.539: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d1d5182f-767a-49d2-aaa8-36318d21f8c3 container client-container: STEP: delete the pod May 11 21:33:52.632: INFO: Waiting for pod downwardapi-volume-d1d5182f-767a-49d2-aaa8-36318d21f8c3 to disappear May 11 21:33:52.643: INFO: Pod downwardapi-volume-d1d5182f-767a-49d2-aaa8-36318d21f8c3 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:33:52.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2793" for this suite. 
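The point of this test is that when a container sets no memory limit, `limits.memory` exposed through a downward API volume falls back to the node's allocatable memory. A sketch of such a pod (names and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test   # hypothetical; the suite generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # assumption; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set here, so the value written to the
    # file below defaults to the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```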
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2475,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:33:52.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 11 21:33:52.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4814' May 11 21:33:56.351: INFO: stderr: "" May 11 21:33:56.351: INFO: stdout: "pod/pause created\n" May 11 21:33:56.351: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 11 21:33:56.351: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4814" to be "running and ready" May 11 21:33:56.410: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 59.394436ms May 11 21:33:58.414: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06263372s May 11 21:34:00.422: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.070752101s May 11 21:34:00.422: INFO: Pod "pause" satisfied condition "running and ready" May 11 21:34:00.422: INFO: Wanted all 1 pods to be running and ready. 
Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 11 21:34:00.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4814' May 11 21:34:00.506: INFO: stderr: "" May 11 21:34:00.506: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 11 21:34:00.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4814' May 11 21:34:00.635: INFO: stderr: "" May 11 21:34:00.635: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 11 21:34:00.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4814' May 11 21:34:00.719: INFO: stderr: "" May 11 21:34:00.719: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 11 21:34:00.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4814' May 11 21:34:00.817: INFO: stderr: "" May 11 21:34:00.817: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 11 21:34:00.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4814' May 11 21:34:00.932: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running 
resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 21:34:00.933: INFO: stdout: "pod \"pause\" force deleted\n" May 11 21:34:00.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4814' May 11 21:34:01.028: INFO: stderr: "No resources found in kubectl-4814 namespace.\n" May 11 21:34:01.028: INFO: stdout: "" May 11 21:34:01.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4814 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 21:34:01.125: INFO: stderr: "" May 11 21:34:01.125: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:34:01.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4814" for this suite. 
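The label test above pipes a manifest to `kubectl create -f -` without showing it. A minimal pod it could be operating on (the image is an assumption; any long-running container works):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pause
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2   # assumption; the suite commonly uses a pause container
```

The label round-trip then matches the log: `kubectl label pods pause testing-label=testing-label-value` adds the label, and `kubectl label pods pause testing-label-` (note the trailing dash) removes it, leaving the `TESTING-LABEL` column empty in the final `kubectl get pod -L` output.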
• [SLOW TEST:8.480 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":142,"skipped":2479,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:34:01.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:34:02.443: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 21:34:05.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829642, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829642, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829642, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829642, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:34:07.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829642, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829642, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829642, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829642, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:34:10.913: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:34:11.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4064" for this suite. STEP: Destroying namespace "webhook-4064-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.970 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":143,"skipped":2497,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:34:11.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 11 21:34:11.174: INFO: Waiting up to 5m0s for pod "pod-b707d320-a831-4ddb-abef-d217c8692d41" in namespace "emptydir-727" to be "success or failure" May 11 21:34:11.178: INFO: Pod "pod-b707d320-a831-4ddb-abef-d217c8692d41": Phase="Pending", Reason="", readiness=false. Elapsed: 3.971513ms May 11 21:34:13.183: INFO: Pod "pod-b707d320-a831-4ddb-abef-d217c8692d41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008425805s May 11 21:34:15.186: INFO: Pod "pod-b707d320-a831-4ddb-abef-d217c8692d41": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012135765s May 11 21:34:17.191: INFO: Pod "pod-b707d320-a831-4ddb-abef-d217c8692d41": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.016650466s STEP: Saw pod success May 11 21:34:17.191: INFO: Pod "pod-b707d320-a831-4ddb-abef-d217c8692d41" satisfied condition "success or failure" May 11 21:34:17.194: INFO: Trying to get logs from node jerma-worker pod pod-b707d320-a831-4ddb-abef-d217c8692d41 container test-container: STEP: delete the pod May 11 21:34:17.244: INFO: Waiting for pod pod-b707d320-a831-4ddb-abef-d217c8692d41 to disappear May 11 21:34:17.247: INFO: Pod pod-b707d320-a831-4ddb-abef-d217c8692d41 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:34:17.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-727" for this suite. • [SLOW TEST:6.152 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2504,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:34:17.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a 
default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:34:17.296: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 11 21:34:19.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-442 create -f -' May 11 21:34:23.657: INFO: stderr: "" May 11 21:34:23.657: INFO: stdout: "e2e-test-crd-publish-openapi-3042-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 11 21:34:23.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-442 delete e2e-test-crd-publish-openapi-3042-crds test-cr' May 11 21:34:23.755: INFO: stderr: "" May 11 21:34:23.755: INFO: stdout: "e2e-test-crd-publish-openapi-3042-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 11 21:34:23.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-442 apply -f -' May 11 21:34:24.017: INFO: stderr: "" May 11 21:34:24.017: INFO: stdout: "e2e-test-crd-publish-openapi-3042-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 11 21:34:24.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-442 delete e2e-test-crd-publish-openapi-3042-crds test-cr' May 11 21:34:24.153: INFO: stderr: "" May 11 21:34:24.153: INFO: stdout: "e2e-test-crd-publish-openapi-3042-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 11 21:34:24.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3042-crds' May 11 21:34:24.395: INFO: stderr: "" 
May 11 21:34:24.395: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3042-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:34:27.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-442" for this suite. • [SLOW TEST:10.017 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":145,"skipped":2518,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:34:27.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components
May 11 21:34:27.352: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 11 21:34:27.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2271'
May 11 21:34:27.765: INFO: stderr: ""
May 11 21:34:27.765: INFO: stdout: "service/agnhost-slave created\n"
May 11 21:34:27.766: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 11 21:34:27.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2271'
May 11 21:34:28.120: INFO: stderr: ""
May 11 21:34:28.120: INFO: stdout: "service/agnhost-master created\n"
May 11 21:34:28.120: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 11 21:34:28.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2271'
May 11 21:34:28.528: INFO: stderr: ""
May 11 21:34:28.528: INFO: stdout: "service/frontend created\n"
May 11 21:34:28.529: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 11 21:34:28.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2271'
May 11 21:34:28.837: INFO: stderr: ""
May 11 21:34:28.837: INFO: stdout: "deployment.apps/frontend created\n"
May 11 21:34:28.838: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 11 21:34:28.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2271'
May 11 21:34:29.196: INFO: stderr: ""
May 11 21:34:29.196: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 11 21:34:29.196: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 11 21:34:29.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2271'
May 11 21:34:29.538: INFO: stderr: ""
May 11 21:34:29.538: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
May 11 21:34:29.538: INFO: Waiting for all frontend pods to be Running.
May 11 21:34:39.588: INFO: Waiting for frontend to serve content.
May 11 21:34:39.596: INFO: Trying to add a new entry to the guestbook.
May 11 21:34:39.603: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
May 11 21:34:39.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2271'
May 11 21:34:39.988: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 11 21:34:39.988: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
May 11 21:34:39.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2271'
May 11 21:34:40.509: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 11 21:34:40.509: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 11 21:34:40.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2271' May 11 21:34:40.802: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 21:34:40.802: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 21:34:40.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2271' May 11 21:34:40.969: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 21:34:40.969: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 11 21:34:40.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2271' May 11 21:34:41.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 21:34:41.070: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 11 21:34:41.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2271' May 11 21:34:41.259: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 11 21:34:41.259: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:34:41.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2271" for this suite. • [SLOW TEST:14.418 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":146,"skipped":2526,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:34:41.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 11 21:34:41.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8142' May 11 21:34:42.547: INFO: stderr: "" May 11 21:34:42.547: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 21:34:42.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8142' May 11 21:34:42.910: INFO: stderr: "" May 11 21:34:42.910: INFO: stdout: "update-demo-nautilus-5vrcj update-demo-nautilus-l725n " May 11 21:34:42.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vrcj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8142' May 11 21:34:43.198: INFO: stderr: "" May 11 21:34:43.198: INFO: stdout: "" May 11 21:34:43.198: INFO: update-demo-nautilus-5vrcj is created but not running May 11 21:34:48.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8142' May 11 21:34:48.320: INFO: stderr: "" May 11 21:34:48.320: INFO: stdout: "update-demo-nautilus-5vrcj update-demo-nautilus-l725n " May 11 21:34:48.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vrcj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8142' May 11 21:34:48.446: INFO: stderr: "" May 11 21:34:48.446: INFO: stdout: "true" May 11 21:34:48.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5vrcj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8142' May 11 21:34:48.535: INFO: stderr: "" May 11 21:34:48.535: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:34:48.535: INFO: validating pod update-demo-nautilus-5vrcj May 11 21:34:48.539: INFO: got data: { "image": "nautilus.jpg" } May 11 21:34:48.539: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 21:34:48.539: INFO: update-demo-nautilus-5vrcj is verified up and running May 11 21:34:48.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l725n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8142' May 11 21:34:48.634: INFO: stderr: "" May 11 21:34:48.634: INFO: stdout: "true" May 11 21:34:48.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l725n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8142' May 11 21:34:48.726: INFO: stderr: "" May 11 21:34:48.726: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:34:48.727: INFO: validating pod update-demo-nautilus-l725n May 11 21:34:48.729: INFO: got data: { "image": "nautilus.jpg" } May 11 21:34:48.729: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 21:34:48.729: INFO: update-demo-nautilus-l725n is verified up and running STEP: using delete to clean up resources May 11 21:34:48.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8142' May 11 21:34:48.831: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 21:34:48.831: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 21:34:48.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8142' May 11 21:34:48.929: INFO: stderr: "No resources found in kubectl-8142 namespace.\n" May 11 21:34:48.929: INFO: stdout: "" May 11 21:34:48.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8142 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 21:34:49.030: INFO: stderr: "" May 11 21:34:49.030: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:34:49.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8142" for this suite. 
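[Editor's note] The replication controller manifest that the test pipes into `kubectl create -f -` above is not echoed in the log, but its key fields can be inferred from the validation steps: the label `name=update-demo` that the go-template queries filter on, two replicas, and a container named `update-demo` running `gcr.io/kubernetes-e2e-test-images/nautilus:1.0`. A minimal sketch assuming only those inferred values (field layout is hypothetical, not the test's actual manifest):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus       # matches "replicationcontroller/update-demo-nautilus created"
spec:
  replicas: 2                      # the log verifies two pods (…-5vrcj and …-l725n)
  selector:
    name: update-demo              # label used by "-l name=update-demo" in the get-pods calls
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo          # container name checked by the go-template status probe
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```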
• [SLOW TEST:7.345 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":147,"skipped":2531,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:34:49.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 11 21:34:49.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1183' May 11 21:34:49.655: INFO: stderr: "" May 11 21:34:49.655: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in 
name=update-demo pods to come up. May 11 21:34:49.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1183' May 11 21:34:49.764: INFO: stderr: "" May 11 21:34:49.764: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 May 11 21:34:54.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1183' May 11 21:34:54.945: INFO: stderr: "" May 11 21:34:54.945: INFO: stdout: "update-demo-nautilus-dpn94 update-demo-nautilus-vgskp " May 11 21:34:54.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dpn94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:34:55.112: INFO: stderr: "" May 11 21:34:55.112: INFO: stdout: "" May 11 21:34:55.112: INFO: update-demo-nautilus-dpn94 is created but not running May 11 21:35:00.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1183' May 11 21:35:00.208: INFO: stderr: "" May 11 21:35:00.208: INFO: stdout: "update-demo-nautilus-dpn94 update-demo-nautilus-vgskp " May 11 21:35:00.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dpn94 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:35:00.307: INFO: stderr: "" May 11 21:35:00.307: INFO: stdout: "true" May 11 21:35:00.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dpn94 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:35:00.401: INFO: stderr: "" May 11 21:35:00.401: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:35:00.401: INFO: validating pod update-demo-nautilus-dpn94 May 11 21:35:00.404: INFO: got data: { "image": "nautilus.jpg" } May 11 21:35:00.404: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 21:35:00.404: INFO: update-demo-nautilus-dpn94 is verified up and running May 11 21:35:00.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgskp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:35:00.492: INFO: stderr: "" May 11 21:35:00.492: INFO: stdout: "true" May 11 21:35:00.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vgskp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:35:00.589: INFO: stderr: "" May 11 21:35:00.589: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:35:00.589: INFO: validating pod update-demo-nautilus-vgskp May 11 21:35:00.592: INFO: got data: { "image": "nautilus.jpg" } May 11 21:35:00.592: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 11 21:35:00.593: INFO: update-demo-nautilus-vgskp is verified up and running STEP: rolling-update to new replication controller May 11 21:35:00.595: INFO: scanned /root for discovery docs: May 11 21:35:00.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1183' May 11 21:35:29.863: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 11 21:35:29.863: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 21:35:29.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1183' May 11 21:35:30.012: INFO: stderr: "" May 11 21:35:30.012: INFO: stdout: "update-demo-kitten-kgtcr update-demo-kitten-vnkjf " May 11 21:35:30.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kgtcr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:35:30.097: INFO: stderr: "" May 11 21:35:30.097: INFO: stdout: "true" May 11 21:35:30.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-kgtcr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:35:30.175: INFO: stderr: "" May 11 21:35:30.175: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 11 21:35:30.175: INFO: validating pod update-demo-kitten-kgtcr May 11 21:35:30.178: INFO: got data: { "image": "kitten.jpg" } May 11 21:35:30.178: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 11 21:35:30.178: INFO: update-demo-kitten-kgtcr is verified up and running May 11 21:35:30.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vnkjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:35:30.268: INFO: stderr: "" May 11 21:35:30.268: INFO: stdout: "true" May 11 21:35:30.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vnkjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1183' May 11 21:35:30.363: INFO: stderr: "" May 11 21:35:30.363: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 11 21:35:30.363: INFO: validating pod update-demo-kitten-vnkjf May 11 21:35:30.366: INFO: got data: { "image": "kitten.jpg" } May 11 21:35:30.366: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
May 11 21:35:30.366: INFO: update-demo-kitten-vnkjf is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:35:30.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1183" for this suite. • [SLOW TEST:41.336 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":148,"skipped":2537,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:35:30.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1504 
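[Editor's note] The service under test is created as `type=ExternalName` and then switched to `type=NodePort`; the `nc` probes that follow show service port 80, cluster IP 10.111.114.157, and an allocated node port of 32483 reachable on nodes 172.17.0.10 and 172.17.0.8. A hedged sketch of what the converted service could look like, with values taken from the log and the selector/targetPort assumed (they are never printed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
  namespace: services-1504
spec:
  type: NodePort                   # changed from ExternalName by the test
  selector:
    name: externalname-service     # assumed; the actual selector is not shown in the log
  ports:
  - port: 80                       # service port probed via "nc -zv externalname-service 80"
    targetPort: 80                 # assumed; only port 80 appears in the log
    nodePort: 32483                # allocated port probed via nc on each node IP
```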
STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-1504 I0511 21:35:32.363927 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1504, replica count: 2 I0511 21:35:35.414344 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:35:38.414551 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 21:35:38.414: INFO: Creating new exec pod May 11 21:35:43.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1504 execpodx6qnb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 11 21:35:44.380: INFO: stderr: "I0511 21:35:44.315262 1969 log.go:172] (0xc00002cd10) (0xc0006b5cc0) Create stream\nI0511 21:35:44.315317 1969 log.go:172] (0xc00002cd10) (0xc0006b5cc0) Stream added, broadcasting: 1\nI0511 21:35:44.317577 1969 log.go:172] (0xc00002cd10) Reply frame received for 1\nI0511 21:35:44.317602 1969 log.go:172] (0xc00002cd10) (0xc0006785a0) Create stream\nI0511 21:35:44.317607 1969 log.go:172] (0xc00002cd10) (0xc0006785a0) Stream added, broadcasting: 3\nI0511 21:35:44.318251 1969 log.go:172] (0xc00002cd10) Reply frame received for 3\nI0511 21:35:44.318265 1969 log.go:172] (0xc00002cd10) (0xc0006b5d60) Create stream\nI0511 21:35:44.318271 1969 log.go:172] (0xc00002cd10) (0xc0006b5d60) Stream added, broadcasting: 5\nI0511 21:35:44.318895 1969 log.go:172] (0xc00002cd10) Reply frame received for 5\nI0511 21:35:44.373710 1969 log.go:172] (0xc00002cd10) Data frame received for 5\nI0511 21:35:44.373749 1969 log.go:172] (0xc0006b5d60) (5) Data frame handling\nI0511 21:35:44.373773 1969 log.go:172] (0xc0006b5d60) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 
80\nI0511 21:35:44.373794 1969 log.go:172] (0xc00002cd10) Data frame received for 5\nI0511 21:35:44.373811 1969 log.go:172] (0xc0006b5d60) (5) Data frame handling\nI0511 21:35:44.373831 1969 log.go:172] (0xc0006b5d60) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0511 21:35:44.373844 1969 log.go:172] (0xc00002cd10) Data frame received for 3\nI0511 21:35:44.373856 1969 log.go:172] (0xc0006785a0) (3) Data frame handling\nI0511 21:35:44.374024 1969 log.go:172] (0xc00002cd10) Data frame received for 5\nI0511 21:35:44.374039 1969 log.go:172] (0xc0006b5d60) (5) Data frame handling\nI0511 21:35:44.375584 1969 log.go:172] (0xc00002cd10) Data frame received for 1\nI0511 21:35:44.375610 1969 log.go:172] (0xc0006b5cc0) (1) Data frame handling\nI0511 21:35:44.375626 1969 log.go:172] (0xc0006b5cc0) (1) Data frame sent\nI0511 21:35:44.375642 1969 log.go:172] (0xc00002cd10) (0xc0006b5cc0) Stream removed, broadcasting: 1\nI0511 21:35:44.375662 1969 log.go:172] (0xc00002cd10) Go away received\nI0511 21:35:44.375981 1969 log.go:172] (0xc00002cd10) (0xc0006b5cc0) Stream removed, broadcasting: 1\nI0511 21:35:44.376001 1969 log.go:172] (0xc00002cd10) (0xc0006785a0) Stream removed, broadcasting: 3\nI0511 21:35:44.376012 1969 log.go:172] (0xc00002cd10) (0xc0006b5d60) Stream removed, broadcasting: 5\n" May 11 21:35:44.380: INFO: stdout: "" May 11 21:35:44.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1504 execpodx6qnb -- /bin/sh -x -c nc -zv -t -w 2 10.111.114.157 80' May 11 21:35:44.556: INFO: stderr: "I0511 21:35:44.495578 1989 log.go:172] (0xc0001022c0) (0xc0007e7c20) Create stream\nI0511 21:35:44.495619 1989 log.go:172] (0xc0001022c0) (0xc0007e7c20) Stream added, broadcasting: 1\nI0511 21:35:44.497334 1989 log.go:172] (0xc0001022c0) Reply frame received for 1\nI0511 21:35:44.497371 1989 log.go:172] (0xc0001022c0) (0xc0007c6000) Create stream\nI0511 21:35:44.497381 1989 log.go:172] 
(0xc0001022c0) (0xc0007c6000) Stream added, broadcasting: 3\nI0511 21:35:44.498165 1989 log.go:172] (0xc0001022c0) Reply frame received for 3\nI0511 21:35:44.498199 1989 log.go:172] (0xc0001022c0) (0xc000760000) Create stream\nI0511 21:35:44.498211 1989 log.go:172] (0xc0001022c0) (0xc000760000) Stream added, broadcasting: 5\nI0511 21:35:44.499043 1989 log.go:172] (0xc0001022c0) Reply frame received for 5\nI0511 21:35:44.552066 1989 log.go:172] (0xc0001022c0) Data frame received for 5\nI0511 21:35:44.552080 1989 log.go:172] (0xc000760000) (5) Data frame handling\nI0511 21:35:44.552094 1989 log.go:172] (0xc000760000) (5) Data frame sent\nI0511 21:35:44.552099 1989 log.go:172] (0xc0001022c0) Data frame received for 5\nI0511 21:35:44.552105 1989 log.go:172] (0xc000760000) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.114.157 80\nConnection to 10.111.114.157 80 port [tcp/http] succeeded!\nI0511 21:35:44.552121 1989 log.go:172] (0xc000760000) (5) Data frame sent\nI0511 21:35:44.552130 1989 log.go:172] (0xc0001022c0) Data frame received for 5\nI0511 21:35:44.552134 1989 log.go:172] (0xc000760000) (5) Data frame handling\nI0511 21:35:44.552405 1989 log.go:172] (0xc0001022c0) Data frame received for 3\nI0511 21:35:44.552421 1989 log.go:172] (0xc0007c6000) (3) Data frame handling\nI0511 21:35:44.553619 1989 log.go:172] (0xc0001022c0) Data frame received for 1\nI0511 21:35:44.553631 1989 log.go:172] (0xc0007e7c20) (1) Data frame handling\nI0511 21:35:44.553641 1989 log.go:172] (0xc0007e7c20) (1) Data frame sent\nI0511 21:35:44.553649 1989 log.go:172] (0xc0001022c0) (0xc0007e7c20) Stream removed, broadcasting: 1\nI0511 21:35:44.553706 1989 log.go:172] (0xc0001022c0) Go away received\nI0511 21:35:44.553872 1989 log.go:172] (0xc0001022c0) (0xc0007e7c20) Stream removed, broadcasting: 1\nI0511 21:35:44.553883 1989 log.go:172] (0xc0001022c0) (0xc0007c6000) Stream removed, broadcasting: 3\nI0511 21:35:44.553888 1989 log.go:172] (0xc0001022c0) (0xc000760000) Stream removed, 
broadcasting: 5\n" May 11 21:35:44.556: INFO: stdout: "" May 11 21:35:44.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1504 execpodx6qnb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32483' May 11 21:35:44.926: INFO: stderr: "I0511 21:35:44.845715 2004 log.go:172] (0xc0008dd4a0) (0xc00097e820) Create stream\nI0511 21:35:44.845784 2004 log.go:172] (0xc0008dd4a0) (0xc00097e820) Stream added, broadcasting: 1\nI0511 21:35:44.849005 2004 log.go:172] (0xc0008dd4a0) Reply frame received for 1\nI0511 21:35:44.849061 2004 log.go:172] (0xc0008dd4a0) (0xc0005146e0) Create stream\nI0511 21:35:44.849077 2004 log.go:172] (0xc0008dd4a0) (0xc0005146e0) Stream added, broadcasting: 3\nI0511 21:35:44.850059 2004 log.go:172] (0xc0008dd4a0) Reply frame received for 3\nI0511 21:35:44.850102 2004 log.go:172] (0xc0008dd4a0) (0xc0003114a0) Create stream\nI0511 21:35:44.850115 2004 log.go:172] (0xc0008dd4a0) (0xc0003114a0) Stream added, broadcasting: 5\nI0511 21:35:44.850985 2004 log.go:172] (0xc0008dd4a0) Reply frame received for 5\nI0511 21:35:44.921910 2004 log.go:172] (0xc0008dd4a0) Data frame received for 5\nI0511 21:35:44.921946 2004 log.go:172] (0xc0008dd4a0) Data frame received for 3\nI0511 21:35:44.921976 2004 log.go:172] (0xc0005146e0) (3) Data frame handling\nI0511 21:35:44.922005 2004 log.go:172] (0xc0003114a0) (5) Data frame handling\nI0511 21:35:44.922017 2004 log.go:172] (0xc0003114a0) (5) Data frame sent\nI0511 21:35:44.922026 2004 log.go:172] (0xc0008dd4a0) Data frame received for 5\nI0511 21:35:44.922035 2004 log.go:172] (0xc0003114a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32483\nConnection to 172.17.0.10 32483 port [tcp/32483] succeeded!\nI0511 21:35:44.922830 2004 log.go:172] (0xc0008dd4a0) Data frame received for 1\nI0511 21:35:44.922909 2004 log.go:172] (0xc00097e820) (1) Data frame handling\nI0511 21:35:44.922942 2004 log.go:172] (0xc00097e820) (1) Data frame sent\nI0511 21:35:44.922971 2004 
log.go:172] (0xc0008dd4a0) (0xc00097e820) Stream removed, broadcasting: 1\nI0511 21:35:44.922995 2004 log.go:172] (0xc0008dd4a0) Go away received\nI0511 21:35:44.923295 2004 log.go:172] (0xc0008dd4a0) (0xc00097e820) Stream removed, broadcasting: 1\nI0511 21:35:44.923310 2004 log.go:172] (0xc0008dd4a0) (0xc0005146e0) Stream removed, broadcasting: 3\nI0511 21:35:44.923318 2004 log.go:172] (0xc0008dd4a0) (0xc0003114a0) Stream removed, broadcasting: 5\n" May 11 21:35:44.926: INFO: stdout: "" May 11 21:35:44.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1504 execpodx6qnb -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32483' May 11 21:35:45.115: INFO: stderr: "I0511 21:35:45.031903 2022 log.go:172] (0xc0000f4420) (0xc0006bdf40) Create stream\nI0511 21:35:45.031939 2022 log.go:172] (0xc0000f4420) (0xc0006bdf40) Stream added, broadcasting: 1\nI0511 21:35:45.033518 2022 log.go:172] (0xc0000f4420) Reply frame received for 1\nI0511 21:35:45.033551 2022 log.go:172] (0xc0000f4420) (0xc0005ee8c0) Create stream\nI0511 21:35:45.033561 2022 log.go:172] (0xc0000f4420) (0xc0005ee8c0) Stream added, broadcasting: 3\nI0511 21:35:45.034224 2022 log.go:172] (0xc0000f4420) Reply frame received for 3\nI0511 21:35:45.034254 2022 log.go:172] (0xc0000f4420) (0xc0006d35e0) Create stream\nI0511 21:35:45.034275 2022 log.go:172] (0xc0000f4420) (0xc0006d35e0) Stream added, broadcasting: 5\nI0511 21:35:45.034902 2022 log.go:172] (0xc0000f4420) Reply frame received for 5\nI0511 21:35:45.099512 2022 log.go:172] (0xc0000f4420) Data frame received for 3\nI0511 21:35:45.099533 2022 log.go:172] (0xc0005ee8c0) (3) Data frame handling\nI0511 21:35:45.099717 2022 log.go:172] (0xc0000f4420) Data frame received for 5\nI0511 21:35:45.099728 2022 log.go:172] (0xc0006d35e0) (5) Data frame handling\nI0511 21:35:45.099737 2022 log.go:172] (0xc0006d35e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 32483\nConnection to 172.17.0.8 32483 port [tcp/32483] 
succeeded!\nI0511 21:35:45.100173 2022 log.go:172] (0xc0000f4420) Data frame received for 5\nI0511 21:35:45.100186 2022 log.go:172] (0xc0006d35e0) (5) Data frame handling\nI0511 21:35:45.112420 2022 log.go:172] (0xc0000f4420) Data frame received for 1\nI0511 21:35:45.112437 2022 log.go:172] (0xc0006bdf40) (1) Data frame handling\nI0511 21:35:45.112445 2022 log.go:172] (0xc0006bdf40) (1) Data frame sent\nI0511 21:35:45.112453 2022 log.go:172] (0xc0000f4420) (0xc0006bdf40) Stream removed, broadcasting: 1\nI0511 21:35:45.112460 2022 log.go:172] (0xc0000f4420) Go away received\nI0511 21:35:45.112693 2022 log.go:172] (0xc0000f4420) (0xc0006bdf40) Stream removed, broadcasting: 1\nI0511 21:35:45.112703 2022 log.go:172] (0xc0000f4420) (0xc0005ee8c0) Stream removed, broadcasting: 3\nI0511 21:35:45.112708 2022 log.go:172] (0xc0000f4420) (0xc0006d35e0) Stream removed, broadcasting: 5\n" May 11 21:35:45.115: INFO: stdout: "" May 11 21:35:45.115: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:35:45.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1504" for this suite. 
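The ExternalName-to-NodePort test above verifies TCP reachability of the service's ClusterIP (10.111.114.157:80) and of each node's NodePort (172.17.0.x:32483) by running `nc -zv -t -w 2 <host> <port>` inside an exec pod. A minimal Python sketch of the same zero-I/O connect probe (the function name `tcp_reachable` is illustrative, not part of the e2e framework):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Probe host:port the way `nc -zv -t -w 2 host port` does: attempt a
    TCP connect with a timeout, send nothing, and report success/failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False
```

The test treats any successful connect as proof the kube-proxy rules for the service are in place; no application-level traffic is exchanged.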
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.546 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":149,"skipped":2549,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:35:45.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 11 21:35:46.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-888' May 11 21:35:47.006: INFO: stderr: "" May 11 21:35:47.006: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo 
pods to come up. May 11 21:35:47.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:35:47.174: INFO: stderr: "" May 11 21:35:47.174: INFO: stdout: "update-demo-nautilus-g228h update-demo-nautilus-gztk9 " May 11 21:35:47.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g228h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:35:47.360: INFO: stderr: "" May 11 21:35:47.360: INFO: stdout: "" May 11 21:35:47.360: INFO: update-demo-nautilus-g228h is created but not running May 11 21:35:52.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:35:52.847: INFO: stderr: "" May 11 21:35:52.847: INFO: stdout: "update-demo-nautilus-g228h update-demo-nautilus-gztk9 " May 11 21:35:52.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g228h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:35:53.000: INFO: stderr: "" May 11 21:35:53.000: INFO: stdout: "" May 11 21:35:53.000: INFO: update-demo-nautilus-g228h is created but not running May 11 21:35:58.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:35:58.100: INFO: stderr: "" May 11 21:35:58.100: INFO: stdout: "update-demo-nautilus-g228h update-demo-nautilus-gztk9 " May 11 21:35:58.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g228h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:35:58.179: INFO: stderr: "" May 11 21:35:58.179: INFO: stdout: "true" May 11 21:35:58.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g228h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:35:58.259: INFO: stderr: "" May 11 21:35:58.259: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:35:58.259: INFO: validating pod update-demo-nautilus-g228h May 11 21:35:58.262: INFO: got data: { "image": "nautilus.jpg" } May 11 21:35:58.262: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 21:35:58.262: INFO: update-demo-nautilus-g228h is verified up and running May 11 21:35:58.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gztk9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:35:58.355: INFO: stderr: "" May 11 21:35:58.355: INFO: stdout: "true" May 11 21:35:58.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gztk9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:35:58.441: INFO: stderr: "" May 11 21:35:58.441: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:35:58.441: INFO: validating pod update-demo-nautilus-gztk9 May 11 21:35:58.444: INFO: got data: { "image": "nautilus.jpg" } May 11 21:35:58.445: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 21:35:58.445: INFO: update-demo-nautilus-gztk9 is verified up and running STEP: scaling down the replication controller May 11 21:35:58.446: INFO: scanned /root for discovery docs: May 11 21:35:58.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-888' May 11 21:36:00.273: INFO: stderr: "" May 11 21:36:00.273: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 11 21:36:00.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:36:00.555: INFO: stderr: "" May 11 21:36:00.555: INFO: stdout: "update-demo-nautilus-g228h update-demo-nautilus-gztk9 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 21:36:05.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:36:05.659: INFO: stderr: "" May 11 21:36:05.659: INFO: stdout: "update-demo-nautilus-g228h update-demo-nautilus-gztk9 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 21:36:10.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:36:10.845: INFO: stderr: "" May 11 21:36:10.845: INFO: stdout: "update-demo-nautilus-g228h update-demo-nautilus-gztk9 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 11 21:36:15.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:36:15.952: INFO: stderr: "" May 11 21:36:15.952: INFO: stdout: "update-demo-nautilus-gztk9 " May 11 21:36:15.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gztk9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:16.719: INFO: stderr: "" May 11 21:36:16.719: INFO: stdout: "true" May 11 21:36:16.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gztk9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:16.892: INFO: stderr: "" May 11 21:36:16.892: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:36:16.892: INFO: validating pod update-demo-nautilus-gztk9 May 11 21:36:16.896: INFO: got data: { "image": "nautilus.jpg" } May 11 21:36:16.896: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 21:36:16.896: INFO: update-demo-nautilus-gztk9 is verified up and running STEP: scaling up the replication controller May 11 21:36:16.900: INFO: scanned /root for discovery docs: May 11 21:36:16.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-888' May 11 21:36:18.389: INFO: stderr: "" May 11 21:36:18.390: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 11 21:36:18.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:36:18.481: INFO: stderr: "" May 11 21:36:18.481: INFO: stdout: "update-demo-nautilus-gztk9 update-demo-nautilus-hz76p " May 11 21:36:18.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gztk9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:18.575: INFO: stderr: "" May 11 21:36:18.575: INFO: stdout: "true" May 11 21:36:18.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gztk9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:18.670: INFO: stderr: "" May 11 21:36:18.670: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:36:18.670: INFO: validating pod update-demo-nautilus-gztk9 May 11 21:36:18.672: INFO: got data: { "image": "nautilus.jpg" } May 11 21:36:18.672: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 21:36:18.672: INFO: update-demo-nautilus-gztk9 is verified up and running May 11 21:36:18.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hz76p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:18.933: INFO: stderr: "" May 11 21:36:18.933: INFO: stdout: "" May 11 21:36:18.933: INFO: update-demo-nautilus-hz76p is created but not running May 11 21:36:23.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-888' May 11 21:36:24.041: INFO: stderr: "" May 11 21:36:24.041: INFO: stdout: "update-demo-nautilus-gztk9 update-demo-nautilus-hz76p " May 11 21:36:24.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gztk9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:24.145: INFO: stderr: "" May 11 21:36:24.145: INFO: stdout: "true" May 11 21:36:24.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gztk9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:24.652: INFO: stderr: "" May 11 21:36:24.652: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:36:24.652: INFO: validating pod update-demo-nautilus-gztk9 May 11 21:36:24.946: INFO: got data: { "image": "nautilus.jpg" } May 11 21:36:24.946: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 21:36:24.946: INFO: update-demo-nautilus-gztk9 is verified up and running May 11 21:36:24.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hz76p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:25.036: INFO: stderr: "" May 11 21:36:25.036: INFO: stdout: "true" May 11 21:36:25.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hz76p -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-888' May 11 21:36:25.548: INFO: stderr: "" May 11 21:36:25.548: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 11 21:36:25.548: INFO: validating pod update-demo-nautilus-hz76p May 11 21:36:25.552: INFO: got data: { "image": "nautilus.jpg" } May 11 21:36:25.552: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 11 21:36:25.552: INFO: update-demo-nautilus-hz76p is verified up and running STEP: using delete to clean up resources May 11 21:36:25.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-888' May 11 21:36:25.727: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 11 21:36:25.727: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 11 21:36:25.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-888' May 11 21:36:25.892: INFO: stderr: "No resources found in kubectl-888 namespace.\n" May 11 21:36:25.892: INFO: stdout: "" May 11 21:36:25.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-888 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 11 21:36:26.028: INFO: stderr: "" May 11 21:36:26.028: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:36:26.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-888" for this suite. 
• [SLOW TEST:40.144 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":150,"skipped":2556,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:36:26.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 11 21:36:28.310: INFO: Waiting up to 5m0s for pod "var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199" in namespace "var-expansion-7582" to be "success or failure" May 11 21:36:28.314: INFO: Pod "var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178221ms May 11 21:36:30.915: INFO: Pod "var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.605166289s May 11 21:36:33.490: INFO: Pod "var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199": Phase="Pending", Reason="", readiness=false. Elapsed: 5.1803103s May 11 21:36:35.640: INFO: Pod "var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.329500363s STEP: Saw pod success May 11 21:36:35.640: INFO: Pod "var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199" satisfied condition "success or failure" May 11 21:36:35.642: INFO: Trying to get logs from node jerma-worker pod var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199 container dapi-container: STEP: delete the pod May 11 21:36:35.794: INFO: Waiting for pod var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199 to disappear May 11 21:36:35.860: INFO: Pod var-expansion-33499e59-c4f3-4e21-96da-aa19d8528199 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:36:35.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7582" for this suite. 
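The Variable Expansion test above uses the framework's "success or failure" wait: it polls the pod's phase, logging the elapsed time at each poll, until the pod reaches a terminal phase. A simplified sketch of that condition wait (`get_phase` is a hypothetical stand-in for the API lookup):

```python
import time

def wait_for_terminal_phase(get_phase, timeout: float = 300.0,
                            interval: float = 2.0) -> str:
    """Wait up to `timeout` for a pod to reach a terminal phase, printing
    per-poll progress in the style of the framework's 'Elapsed:' lines."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        print(f'Phase="{phase}" Elapsed: {time.monotonic() - start:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")
```

A "success or failure" wait deliberately accepts `Failed` as a completed wait; the test then inspects logs to decide pass/fail, rather than timing out on a crashed pod.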
• [SLOW TEST:9.805 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:36:35.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 11 21:36:43.388: INFO: Successfully updated pod "annotationupdated91a70c3-3bcf-475d-8463-19f3325fc4e7" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:36:45.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3879" for this suite. 
• [SLOW TEST:9.540 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2614,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:36:45.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-186ac25d-00a4-46e4-8f6b-e83024d58440 STEP: Creating a pod to test consume secrets May 11 21:36:45.963: INFO: Waiting up to 5m0s for pod "pod-secrets-5ee624de-9017-4436-bc21-4102431987d7" in namespace "secrets-8849" to be "success or failure" May 11 21:36:45.997: INFO: Pod "pod-secrets-5ee624de-9017-4436-bc21-4102431987d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.170464ms May 11 21:36:48.002: INFO: Pod "pod-secrets-5ee624de-9017-4436-bc21-4102431987d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038222077s May 11 21:36:50.257: INFO: Pod "pod-secrets-5ee624de-9017-4436-bc21-4102431987d7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.293741887s May 11 21:36:52.658: INFO: Pod "pod-secrets-5ee624de-9017-4436-bc21-4102431987d7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.694879203s May 11 21:36:54.795: INFO: Pod "pod-secrets-5ee624de-9017-4436-bc21-4102431987d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.832014371s STEP: Saw pod success May 11 21:36:54.795: INFO: Pod "pod-secrets-5ee624de-9017-4436-bc21-4102431987d7" satisfied condition "success or failure" May 11 21:36:54.798: INFO: Trying to get logs from node jerma-worker pod pod-secrets-5ee624de-9017-4436-bc21-4102431987d7 container secret-volume-test: STEP: delete the pod May 11 21:36:55.480: INFO: Waiting for pod pod-secrets-5ee624de-9017-4436-bc21-4102431987d7 to disappear May 11 21:36:55.549: INFO: Pod pod-secrets-5ee624de-9017-4436-bc21-4102431987d7 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:36:55.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8849" for this suite. 
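The Secrets volume test above exercises `defaultMode` together with `fsGroup`: `defaultMode` is an octal permission applied to each projected secret file. As a small illustration of how such a mode renders as an `ls`-style permission string (this helper is purely illustrative, not framework code):

```python
import stat

def volume_file_mode(default_mode: int) -> str:
    """Render a secret volume's defaultMode (e.g. 0o440) as the ls-style
    string a container would see on the mounted file."""
    return stat.filemode(stat.S_IFREG | default_mode)
```

For example, a `defaultMode` of `0o440` yields a file readable by owner and group only, which is what lets a non-root pod with a matching `fsGroup` read the mounted secret.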
• [SLOW TEST:10.278 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2673,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:36:55.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:36:55.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb" in namespace "projected-4951" to be "success or failure" May 11 21:36:55.825: INFO: Pod "downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb": 
Phase="Pending", Reason="", readiness=false. Elapsed: 22.824335ms May 11 21:36:57.927: INFO: Pod "downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124981249s May 11 21:37:00.170: INFO: Pod "downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.367568707s May 11 21:37:02.174: INFO: Pod "downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.371667321s STEP: Saw pod success May 11 21:37:02.174: INFO: Pod "downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb" satisfied condition "success or failure" May 11 21:37:02.177: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb container client-container: STEP: delete the pod May 11 21:37:02.659: INFO: Waiting for pod downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb to disappear May 11 21:37:02.675: INFO: Pod downwardapi-volume-a53cd0aa-9542-4107-9107-d63a4fd4aabb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:37:02.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4951" for this suite. 
• [SLOW TEST:6.995 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2684,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:37:02.683: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:37:02.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec" in namespace "downward-api-7119" to be "success or failure" May 11 21:37:03.036: INFO: Pod "downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 57.678192ms May 11 21:37:05.047: INFO: Pod "downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06797527s May 11 21:37:07.050: INFO: Pod "downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec": Phase="Running", Reason="", readiness=true. Elapsed: 4.071508656s May 11 21:37:09.054: INFO: Pod "downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075380136s STEP: Saw pod success May 11 21:37:09.054: INFO: Pod "downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec" satisfied condition "success or failure" May 11 21:37:09.057: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec container client-container: STEP: delete the pod May 11 21:37:09.090: INFO: Waiting for pod downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec to disappear May 11 21:37:09.112: INFO: Pod downwardapi-volume-c4dc09bf-e24a-496d-8ba2-c6190f1f9eec no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:37:09.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7119" for this suite. 
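The "should provide podname only" test uses the other half of the downward API: a plain `downwardAPI` volume with `fieldRef` (pod metadata) rather than `resourceFieldRef` (container resources). A sketch of that volume source, under the assumption the file is named `podname` as the test title suggests:

```python
# Downward API volume exposing the pod's own name as a file.
# fieldRef selects pod metadata; contrast with resourceFieldRef,
# which selects container resource values.
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "podname",
            "fieldRef": {"fieldPath": "metadata.name"},
        }],
    },
}
```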
• [SLOW TEST:6.435 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2703,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:37:09.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:37:09.226: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 11 21:37:14.239: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 11 21:37:14.239: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 11 21:37:16.243: INFO: Creating deployment "test-rollover-deployment" May 11 21:37:16.770: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 11 21:37:19.113: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 11 21:37:19.351: INFO: Ensure that both replica sets have 1 
created replica May 11 21:37:19.357: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 11 21:37:19.364: INFO: Updating deployment test-rollover-deployment May 11 21:37:19.364: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 11 21:37:21.935: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 11 21:37:22.892: INFO: Make sure deployment "test-rollover-deployment" is complete May 11 21:37:23.507: INFO: all replica sets need to contain the pod-template-hash label May 11 21:37:23.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829842, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:25.880: INFO: all replica sets need to contain the pod-template-hash label May 11 21:37:25.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829842, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:27.733: INFO: all replica sets need to contain the pod-template-hash label May 11 21:37:27.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829842, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:29.514: INFO: all replica sets need to contain the pod-template-hash label May 11 21:37:29.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829848, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:31.513: INFO: all replica sets need to contain the pod-template-hash label May 11 21:37:31.513: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829848, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:33.514: INFO: all replica sets need to contain the pod-template-hash label May 11 21:37:33.514: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829848, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:35.515: INFO: all replica sets need to contain the pod-template-hash label May 11 21:37:35.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829848, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:37.514: INFO: all 
replica sets need to contain the pod-template-hash label May 11 21:37:37.514: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829837, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829848, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:39.646: INFO: May 11 21:37:39.646: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 21:37:39.655: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5682 /apis/apps/v1/namespaces/deployment-5682/deployments/test-rollover-deployment 2c21a9e9-f1a0-4cce-b6e3-0800dcea8f11 15361723 2 2020-05-11 21:37:16 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File 
IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0039f4a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-11 21:37:17 +0000 UTC,LastTransitionTime:2020-05-11 21:37:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-11 21:37:39 +0000 UTC,LastTransitionTime:2020-05-11 21:37:16 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 11 21:37:39.658: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5682 /apis/apps/v1/namespaces/deployment-5682/replicasets/test-rollover-deployment-574d6dfbff ec7fcd29-ab40-4c8e-9d98-0dfc1cc378c9 15361710 2 2020-05-11 21:37:19 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 
2c21a9e9-f1a0-4cce-b6e3-0800dcea8f11 0xc003a9a887 0xc003a9a888}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a9a8f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 11 21:37:39.658: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 11 21:37:39.658: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5682 /apis/apps/v1/namespaces/deployment-5682/replicasets/test-rollover-controller c7881a3c-24b7-4dd8-b35b-59134584d08c 15361722 2 2020-05-11 21:37:09 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 2c21a9e9-f1a0-4cce-b6e3-0800dcea8f11 0xc003a9a7b7 0xc003a9a7b8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003a9a818 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 21:37:39.658: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5682 /apis/apps/v1/namespaces/deployment-5682/replicasets/test-rollover-deployment-f6c94f66c 5e435258-0207-49dc-a017-df9672bf01f9 15361658 2 2020-05-11 21:37:16 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 2c21a9e9-f1a0-4cce-b6e3-0800dcea8f11 0xc003a9a960 0xc003a9a961}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a9a9d8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 21:37:39.661: INFO: Pod "test-rollover-deployment-574d6dfbff-4d57d" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-4d57d test-rollover-deployment-574d6dfbff- deployment-5682 /api/v1/namespaces/deployment-5682/pods/test-rollover-deployment-574d6dfbff-4d57d f9aca6fc-a9b1-478b-b30a-96055021533f 15361681 0 2020-05-11 21:37:19 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff ec7fcd29-ab40-4c8e-9d98-0dfc1cc378c9 0xc003a9af17 0xc003a9af18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lcqnw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lcqnw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lcqnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil
,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:37:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:37:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:37:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:37:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.139,StartTime:2020-05-11 21:37:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 21:37:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://e2023ad35aebd2c5cd8c50aa81da3cb68b6559573c0a8cb99c782b4fae1f120d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:37:39.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5682" for this suite. 
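The Deployment dumped above declares `RollingUpdate` with `MaxUnavailable:0`, `MaxSurge:1`, `Replicas:*1`, and `MinReadySeconds:10`, which explains the intermediate status lines showing `Replicas:2, AvailableReplicas:1` while the rollout waits for the new pod to be ready-for-10s. A small sketch of the surge arithmetic for absolute (non-percentage) values:

```python
# Rolling-update bounds for the deployment dumped above
# (replicas=1, maxSurge=1, maxUnavailable=0).
def rolling_update_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Return (max_total_pods, min_available_pods) during a rollout."""
    max_total = replicas + max_surge            # old + new pods may coexist
    min_available = replicas - max_unavailable  # floor on serving capacity
    return max_total, min_available

# At most 2 pods exist at once and at least 1 must stay available --
# matching the Replicas:2, AvailableReplicas:1 seen in the log while
# the controller waits out MinReadySeconds=10 before scaling down.
print(rolling_update_bounds(1, 1, 0))  # (2, 1)
```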
• [SLOW TEST:30.551 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":156,"skipped":2708,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:37:39.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:37:41.221: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 21:37:43.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829861, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829861, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829861, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:37:45.527: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829861, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829861, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724829861, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:37:49.060: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the 
AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:37:49.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5297" for this suite. STEP: Destroying namespace "webhook-5297-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.745 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":157,"skipped":2732,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:37:51.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 21:37:51.493: INFO: Waiting up to 5m0s for pod "downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2" in namespace "downward-api-9538" to be "success or failure" May 11 21:37:51.569: INFO: Pod "downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 76.880429ms May 11 21:37:53.575: INFO: Pod "downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082497967s May 11 21:37:55.761: INFO: Pod "downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268448274s May 11 21:37:57.770: INFO: Pod "downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2": Phase="Running", Reason="", readiness=true. Elapsed: 6.277629381s May 11 21:37:59.773: INFO: Pod "downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.280848557s STEP: Saw pod success May 11 21:37:59.774: INFO: Pod "downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2" satisfied condition "success or failure" May 11 21:37:59.778: INFO: Trying to get logs from node jerma-worker2 pod downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2 container dapi-container: STEP: delete the pod May 11 21:37:59.798: INFO: Waiting for pod downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2 to disappear May 11 21:37:59.814: INFO: Pod downward-api-6f1633f4-de13-43c8-b86d-1ca353cb1bd2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:37:59.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9538" for this suite. 
• [SLOW TEST:8.406 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2764,"failed":0} [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:37:59.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 11 21:37:59.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 11 21:38:00.061: INFO: stderr: "" May 11 21:38:00.061: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:38:00.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5943" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":159,"skipped":2764,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:38:00.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 11 21:38:00.128: INFO: Waiting up to 5m0s for pod "pod-03309e93-dec4-48e3-93f9-5a815ee9d886" in namespace "emptydir-3037" to be "success or failure" May 11 21:38:00.132: INFO: Pod "pod-03309e93-dec4-48e3-93f9-5a815ee9d886": Phase="Pending", Reason="", readiness=false. Elapsed: 3.526253ms May 11 21:38:02.137: INFO: Pod "pod-03309e93-dec4-48e3-93f9-5a815ee9d886": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009243706s May 11 21:38:04.390: INFO: Pod "pod-03309e93-dec4-48e3-93f9-5a815ee9d886": Phase="Pending", Reason="", readiness=false. Elapsed: 4.262008696s May 11 21:38:06.393: INFO: Pod "pod-03309e93-dec4-48e3-93f9-5a815ee9d886": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.265142602s STEP: Saw pod success May 11 21:38:06.393: INFO: Pod "pod-03309e93-dec4-48e3-93f9-5a815ee9d886" satisfied condition "success or failure" May 11 21:38:06.395: INFO: Trying to get logs from node jerma-worker2 pod pod-03309e93-dec4-48e3-93f9-5a815ee9d886 container test-container: STEP: delete the pod May 11 21:38:06.419: INFO: Waiting for pod pod-03309e93-dec4-48e3-93f9-5a815ee9d886 to disappear May 11 21:38:06.437: INFO: Pod pod-03309e93-dec4-48e3-93f9-5a815ee9d886 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:38:06.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3037" for this suite. • [SLOW TEST:6.367 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2765,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:38:06.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 11 21:38:06.492: INFO: Waiting up to 5m0s for pod "pod-61710cd9-d1ab-46b8-9c54-0a18d1f2a5a5" in namespace "emptydir-9940" to be "success or failure" May 11 21:38:06.526: INFO: Pod "pod-61710cd9-d1ab-46b8-9c54-0a18d1f2a5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.086805ms May 11 21:38:08.587: INFO: Pod "pod-61710cd9-d1ab-46b8-9c54-0a18d1f2a5a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095208014s May 11 21:38:10.591: INFO: Pod "pod-61710cd9-d1ab-46b8-9c54-0a18d1f2a5a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098898274s STEP: Saw pod success May 11 21:38:10.591: INFO: Pod "pod-61710cd9-d1ab-46b8-9c54-0a18d1f2a5a5" satisfied condition "success or failure" May 11 21:38:10.594: INFO: Trying to get logs from node jerma-worker2 pod pod-61710cd9-d1ab-46b8-9c54-0a18d1f2a5a5 container test-container: STEP: delete the pod May 11 21:38:10.888: INFO: Waiting for pod pod-61710cd9-d1ab-46b8-9c54-0a18d1f2a5a5 to disappear May 11 21:38:11.012: INFO: Pod pod-61710cd9-d1ab-46b8-9c54-0a18d1f2a5a5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:38:11.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9940" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2768,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:38:11.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 11 21:38:17.864: INFO: Successfully updated pod "labelsupdate93121eff-c0e5-4288-a426-9560a9e33d05" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:38:19.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2291" for this suite. 
• [SLOW TEST:8.951 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2772,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:38:19.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 11 21:38:20.147: INFO: Waiting up to 5m0s for pod "pod-d7e65d5f-6502-4de3-be9e-e4d18998e257" in namespace "emptydir-274" to be "success or failure" May 11 21:38:20.168: INFO: Pod "pod-d7e65d5f-6502-4de3-be9e-e4d18998e257": Phase="Pending", Reason="", readiness=false. Elapsed: 21.922414ms May 11 21:38:22.172: INFO: Pod "pod-d7e65d5f-6502-4de3-be9e-e4d18998e257": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02510156s May 11 21:38:24.557: INFO: Pod "pod-d7e65d5f-6502-4de3-be9e-e4d18998e257": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.410864238s May 11 21:38:26.683: INFO: Pod "pod-d7e65d5f-6502-4de3-be9e-e4d18998e257": Phase="Running", Reason="", readiness=true. Elapsed: 6.536771279s May 11 21:38:28.707: INFO: Pod "pod-d7e65d5f-6502-4de3-be9e-e4d18998e257": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.560446801s STEP: Saw pod success May 11 21:38:28.707: INFO: Pod "pod-d7e65d5f-6502-4de3-be9e-e4d18998e257" satisfied condition "success or failure" May 11 21:38:28.709: INFO: Trying to get logs from node jerma-worker pod pod-d7e65d5f-6502-4de3-be9e-e4d18998e257 container test-container: STEP: delete the pod May 11 21:38:28.800: INFO: Waiting for pod pod-d7e65d5f-6502-4de3-be9e-e4d18998e257 to disappear May 11 21:38:28.856: INFO: Pod pod-d7e65d5f-6502-4de3-be9e-e4d18998e257 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:38:28.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-274" for this suite. 
• [SLOW TEST:8.894 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:38:28.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 11 21:38:28.985: INFO: Waiting up to 5m0s for pod "client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067" in namespace "containers-7447" to be "success or failure" May 11 21:38:29.029: INFO: Pod "client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067": Phase="Pending", Reason="", readiness=false. Elapsed: 44.122126ms May 11 21:38:31.109: INFO: Pod "client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.124144871s May 11 21:38:33.112: INFO: Pod "client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126783424s May 11 21:38:35.195: INFO: Pod "client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.210288206s STEP: Saw pod success May 11 21:38:35.195: INFO: Pod "client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067" satisfied condition "success or failure" May 11 21:38:35.198: INFO: Trying to get logs from node jerma-worker pod client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067 container test-container: STEP: delete the pod May 11 21:38:35.248: INFO: Waiting for pod client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067 to disappear May 11 21:38:35.323: INFO: Pod client-containers-4c337e27-5682-4ad4-8dbd-9bc567c8a067 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:38:35.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7447" for this suite. 
• [SLOW TEST:6.454 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2800,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:38:35.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4604.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4604.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4604.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4604.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4604.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4604.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 21:38:43.727: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:43.730: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:43.732: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:43.735: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:43.741: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:43.743: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from 
pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:43.746: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:43.748: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:43.752: INFO: Lookups using dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local] May 11 21:38:48.757: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:48.760: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:48.768: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local from 
pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:48.771: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:48.797: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:48.800: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:48.803: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:48.806: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:48.811: INFO: Lookups using dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local] May 11 21:38:53.756: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:53.758: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:53.760: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:53.762: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:53.767: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:53.769: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:53.771: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod 
dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:53.773: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:53.777: INFO: Lookups using dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local] May 11 21:38:58.757: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:58.760: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:58.764: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:58.767: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod 
dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:58.778: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:58.781: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:58.785: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:58.789: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:38:58.795: INFO: Lookups using dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local] May 11 21:39:03.756: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:03.759: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:03.762: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:03.764: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:03.770: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:03.772: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:03.775: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:03.776: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:03.781: INFO: Lookups using dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local] May 11 21:39:08.756: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:08.760: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:08.763: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:08.766: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:08.773: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:08.776: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:08.779: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:08.781: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local from pod dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f: the server could not find the requested resource (get pods dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f) May 11 21:39:08.786: INFO: Lookups using dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4604.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4604.svc.cluster.local jessie_udp@dns-test-service-2.dns-4604.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4604.svc.cluster.local] May 11 21:39:13.998: INFO: DNS probes using dns-4604/dns-test-9b375240-ffce-4d35-8c7c-0e66ce06701f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 
21:39:14.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4604" for this suite. • [SLOW TEST:39.484 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":165,"skipped":2807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:39:14.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:39:14.986: INFO: Create a RollingUpdate DaemonSet May 11 21:39:14.990: INFO: Check that daemon pods launch on every node of the cluster May 11 21:39:15.008: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:15.013: INFO: Number of nodes with available pods: 0 May 11 21:39:15.013: INFO: Node jerma-worker is running more than one daemon 
pod May 11 21:39:16.158: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:16.161: INFO: Number of nodes with available pods: 0 May 11 21:39:16.161: INFO: Node jerma-worker is running more than one daemon pod May 11 21:39:17.373: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:17.433: INFO: Number of nodes with available pods: 0 May 11 21:39:17.433: INFO: Node jerma-worker is running more than one daemon pod May 11 21:39:18.194: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:18.876: INFO: Number of nodes with available pods: 0 May 11 21:39:18.876: INFO: Node jerma-worker is running more than one daemon pod May 11 21:39:19.277: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:19.279: INFO: Number of nodes with available pods: 0 May 11 21:39:19.279: INFO: Node jerma-worker is running more than one daemon pod May 11 21:39:20.056: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:20.116: INFO: Number of nodes with available pods: 0 May 11 21:39:20.116: INFO: Node jerma-worker is running more than one daemon pod May 11 21:39:21.421: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:21.452: INFO: Number of nodes with available pods: 0 May 11 21:39:21.452: INFO: 
Node jerma-worker is running more than one daemon pod May 11 21:39:22.160: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:22.307: INFO: Number of nodes with available pods: 0 May 11 21:39:22.307: INFO: Node jerma-worker is running more than one daemon pod May 11 21:39:23.020: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:23.023: INFO: Number of nodes with available pods: 2 May 11 21:39:23.023: INFO: Number of running nodes: 2, number of available pods: 2 May 11 21:39:23.023: INFO: Update the DaemonSet to trigger a rollout May 11 21:39:23.029: INFO: Updating DaemonSet daemon-set May 11 21:39:30.195: INFO: Roll back the DaemonSet before rollout is complete May 11 21:39:30.200: INFO: Updating DaemonSet daemon-set May 11 21:39:30.200: INFO: Make sure DaemonSet rollback is complete May 11 21:39:30.248: INFO: Wrong image for pod: daemon-set-nhtk8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 11 21:39:30.248: INFO: Pod daemon-set-nhtk8 is not available May 11 21:39:30.274: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:31.279: INFO: Wrong image for pod: daemon-set-nhtk8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 11 21:39:31.279: INFO: Pod daemon-set-nhtk8 is not available May 11 21:39:31.283: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 21:39:32.331: INFO: Pod daemon-set-v2k8x is not available May 11 21:39:32.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5154, will wait for the garbage collector to delete the pods May 11 21:39:32.401: INFO: Deleting DaemonSet.extensions daemon-set took: 8.041965ms May 11 21:39:32.701: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.260882ms May 11 21:39:36.004: INFO: Number of nodes with available pods: 0 May 11 21:39:36.004: INFO: Number of running nodes: 0, number of available pods: 0 May 11 21:39:36.006: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5154/daemonsets","resourceVersion":"15362430"},"items":null} May 11 21:39:36.008: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5154/pods","resourceVersion":"15362430"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:39:36.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5154" for this suite. 
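The rollback check logged above repeatedly compares each daemon pod's image against the expected one ("Wrong image for pod: daemon-set-nhtk8. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.") until no mismatches remain. The real check lives in the Go e2e framework; the following is only a minimal Python sketch of that comparison, with `check_daemon_pod_images` as a hypothetical helper name:

```python
# Sketch of the wrong-image check seen in the DaemonSet rollback log above.
# Pod name and images mirror the log; the helper name is hypothetical.

EXPECTED_IMAGE = "docker.io/library/httpd:2.4.38-alpine"

def check_daemon_pod_images(pods, expected=EXPECTED_IMAGE):
    """Return log-style messages for pods whose image does not match `expected`."""
    msgs = []
    for name, image in pods.items():
        if image != expected:
            msgs.append(
                f"Wrong image for pod: {name}. Expected: {expected}, got: {image}."
            )
    return msgs

# Mirrors the state at 21:39:30 in the log: the rollback has not replaced this pod yet.
print(check_daemon_pod_images({"daemon-set-nhtk8": "foo:non-existent"}))
```

The test passes once this list is empty for every node's daemon pod.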
• [SLOW TEST:21.257 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":166,"skipped":2830,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:39:36.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-2415/secret-test-a9b7b14c-0117-4b3d-a638-e9dc385b755a STEP: Creating a pod to test consume secrets May 11 21:39:36.227: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e" in namespace "secrets-2415" to be "success or failure" May 11 21:39:36.241: INFO: Pod "pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.802172ms May 11 21:39:38.245: INFO: Pod "pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017088758s May 11 21:39:40.248: INFO: Pod "pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e": Phase="Running", Reason="", readiness=true. Elapsed: 4.020352275s May 11 21:39:42.273: INFO: Pod "pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045094793s STEP: Saw pod success May 11 21:39:42.273: INFO: Pod "pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e" satisfied condition "success or failure" May 11 21:39:42.275: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e container env-test: STEP: delete the pod May 11 21:39:42.304: INFO: Waiting for pod pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e to disappear May 11 21:39:42.336: INFO: Pod pod-configmaps-1b61a6c3-7335-4782-ab43-2a7101ec1f8e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:39:42.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2415" for this suite. 
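The Secrets test above waits up to 5m0s for the pod to reach the "success or failure" condition, polling its phase (Pending → Running → Succeeded in the log). A minimal sketch of that polling loop, with the phase lookup injected so no live cluster is needed (the actual implementation is Go in the e2e framework):

```python
def wait_for_success_or_failure(get_phase, poll_s=2.0, timeout_s=300.0):
    """Poll a pod's phase until it is terminal (Succeeded/Failed) or time out.
    `get_phase` is injected; real code would query the API server and sleep."""
    elapsed = 0.0
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        elapsed += poll_s  # stand-in for the sleep between polls

    raise TimeoutError("pod never reached a terminal phase")

# Phase sequence mirroring the log: Pending, Pending, Running, then Succeeded.
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_success_or_failure(lambda: next(phases)))  # Succeeded
```

"Succeeded" satisfies the condition, which is why the log then reports "Saw pod success".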
• [SLOW TEST:6.271 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2852,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:39:42.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3ea252c1-2082-4b87-9c49-28a6fbc8a4fb STEP: Creating a pod to test consume secrets May 11 21:39:42.423: INFO: Waiting up to 5m0s for pod "pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3" in namespace "secrets-3060" to be "success or failure" May 11 21:39:42.474: INFO: Pod "pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3": Phase="Pending", Reason="", readiness=false. Elapsed: 50.974418ms May 11 21:39:44.479: INFO: Pod "pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.055476317s May 11 21:39:46.483: INFO: Pod "pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3": Phase="Running", Reason="", readiness=true. Elapsed: 4.059583103s May 11 21:39:48.487: INFO: Pod "pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063962292s STEP: Saw pod success May 11 21:39:48.488: INFO: Pod "pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3" satisfied condition "success or failure" May 11 21:39:48.491: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3 container secret-volume-test: STEP: delete the pod May 11 21:39:48.528: INFO: Waiting for pod pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3 to disappear May 11 21:39:48.650: INFO: Pod pod-secrets-3d028525-c3f0-46d1-aa38-0c7daeb8e1b3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:39:48.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3060" for this suite. 
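The volume test above mounts a Secret into the pod and has the container read the file back. In the Kubernetes API object, Secret values sit base64-encoded under `data`, and the kubelet writes the decoded bytes into the mounted files. A small sketch of that decoding step (the key and value here are illustrative, not taken from the log):

```python
import base64

# Secret values are base64-encoded in the API object's `data` field and appear
# decoded in the mounted volume. Key/value below are made up for illustration.
secret_data = {"data-1": base64.b64encode(b"value-1").decode()}

def mounted_file_contents(data):
    """Decode each data entry the way it would appear in the mounted file."""
    return {k: base64.b64decode(v).decode() for k, v in data.items()}

print(mounted_file_contents(secret_data))  # {'data-1': 'value-1'}
```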
• [SLOW TEST:6.315 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2868,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:39:48.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:39:48.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d" in namespace "projected-1501" to be "success or failure" May 11 21:39:49.134: INFO: Pod "downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 153.881018ms May 11 21:39:51.137: INFO: Pod "downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157119054s May 11 21:39:53.140: INFO: Pod "downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160613307s May 11 21:39:55.144: INFO: Pod "downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.164303922s STEP: Saw pod success May 11 21:39:55.144: INFO: Pod "downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d" satisfied condition "success or failure" May 11 21:39:55.147: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d container client-container: STEP: delete the pod May 11 21:39:55.208: INFO: Waiting for pod downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d to disappear May 11 21:39:55.218: INFO: Pod downwardapi-volume-cf1fc9f4-c17a-46d4-8219-00d0962a5c4d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:39:55.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1501" for this suite. 
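The DefaultMode the projected-volume test checks is stored in the API as a decimal integer; the value 420 seen in pod manifests (for example in the token-volume `"defaultMode": 420` later in this run) is simply octal 0644, the default file mode for mounted files. A one-liner to confirm the conversion:

```python
# Kubernetes serializes volume defaultMode as decimal; 420 decimal == 0644 octal,
# i.e. rw-r--r--, the mode this test expects on the projected files.
default_mode = 420
assert default_mode == 0o644
print(oct(default_mode))  # 0o644
```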
• [SLOW TEST:6.564 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2899,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:39:55.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 21:39:55.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7448' May 11 21:39:55.465: INFO: stderr: "" May 11 21:39:55.465: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: 
verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 11 21:40:00.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7448 -o json' May 11 21:40:00.598: INFO: stderr: "" May 11 21:40:00.598: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-11T21:39:55Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7448\",\n \"resourceVersion\": \"15362602\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7448/pods/e2e-test-httpd-pod\",\n \"uid\": \"6f8a3056-b69d-4741-904a-dfc9d0283fd1\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-fmm7m\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-fmm7m\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-fmm7m\"\n }\n }\n 
]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T21:39:55Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T21:39:58Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T21:39:58Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-11T21:39:55Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://e04287ac75e62167dfb2cd23aceb647be3c28756728a911cc34fa96f1aa66970\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-11T21:39:58Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.203\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.203\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-11T21:39:55Z\"\n }\n}\n" STEP: replace the image in the pod May 11 21:40:00.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7448' May 11 21:40:01.065: INFO: stderr: "" May 11 21:40:01.065: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 11 21:40:01.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7448' May 11 
21:40:09.278: INFO: stderr: "" May 11 21:40:09.278: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:40:09.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7448" for this suite. • [SLOW TEST:14.139 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":170,"skipped":2909,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:40:09.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 21:40:21.668: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 21:40:21.690: INFO: Pod pod-with-prestop-exec-hook still exists May 11 21:40:23.690: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 21:40:23.695: INFO: Pod pod-with-prestop-exec-hook still exists May 11 21:40:25.690: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 21:40:25.823: INFO: Pod pod-with-prestop-exec-hook still exists May 11 21:40:27.690: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 21:40:27.695: INFO: Pod pod-with-prestop-exec-hook still exists May 11 21:40:29.690: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 11 21:40:29.694: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:40:29.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1881" for this suite. 
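The "Waiting for pod pod-with-prestop-exec-hook to disappear" loop above polls GET on the pod and treats a NotFound response as successful deletion; in the log the pod "still exists" for several polls while its preStop hook and grace period run, then is gone. A sketch of that loop with a fake client injected (the framework's real version is Go and talks to the API server):

```python
# Sketch of the "wait for pod to disappear" loop: poll GET until it raises
# NotFound. The client below is a fake; names are illustrative.

class NotFound(Exception):
    pass

def wait_for_pod_to_disappear(get_pod, attempts=150):
    """Poll until GET raises NotFound (pod deleted); return how many polls
    still found the pod. Raises TimeoutError if attempts are exhausted."""
    for i in range(attempts):
        try:
            get_pod()
        except NotFound:
            return i
    raise TimeoutError("pod still exists")

# Fake client: the pod survives four polls (as in the log), then is deleted.
state = {"polls": 0}
def fake_get_pod():
    state["polls"] += 1
    if state["polls"] > 4:
        raise NotFound()

print(wait_for_pod_to_disappear(fake_get_pod))  # 4
```

The four "still exists" polls here correspond to the 21:40:21 through 21:40:27 records in the log before the pod no longer exists.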
• [SLOW TEST:20.349 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2925,"failed":0}
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:40:29.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating server pod server in namespace prestop-4412
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4412
STEP: Deleting pre-stop pod
May 11 21:40:45.220: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:40:45.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4412" for this suite.
• [SLOW TEST:15.687 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":172,"skipped":2932,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:40:45.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 11 21:40:53.616: INFO: 10 pods remaining
May 11 21:40:53.616: INFO: 10 pods has nil DeletionTimestamp
May 11 21:40:53.616: INFO:
May 11 21:40:56.406: INFO: 10 pods remaining
May 11 21:40:56.406: INFO: 0 pods has nil DeletionTimestamp
May 11 21:40:56.406: INFO:
May 11 21:40:58.052: INFO: 0 pods remaining
May 11 21:40:58.052: INFO: 0 pods has nil DeletionTimestamp
May 11 21:40:58.052: INFO:
STEP: Gathering metrics
W0511 21:41:01.006125 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 21:41:01.006: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:41:01.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8176" for this suite.
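Per the spec name, the deleteOptions here make the garbage collector keep the RC until its dependent pods are gone; while waiting, the test logs two counters: how many pods remain and how many still have a nil DeletionTimestamp. A toy Python model of those counters (dict-shaped pods are an assumption for illustration, not the real API objects):

```python
def gc_progress(pods):
    """Return (remaining, unmarked): the "N pods remaining" and
    "M pods has nil DeletionTimestamp" counters from the log. A pod with
    a nil deletionTimestamp has not yet been marked for deletion."""
    remaining = len(pods)
    unmarked = sum(1 for p in pods if p.get("deletionTimestamp") is None)
    return remaining, unmarked
```

The three log snapshots correspond to (10, 10) — nothing marked yet, (10, 0) — all marked but still terminating, and (0, 0) — all gone, at which point the owner RC itself may be removed.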
• [SLOW TEST:16.173 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":173,"skipped":2933,"failed":0}
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:41:01.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-ef1bd790-fd66-4ee8-851e-22b6ddc48748
STEP: Creating a pod to test consume configMaps
May 11 21:41:03.736: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e" in namespace "projected-7543" to be "success or failure"
May 11 21:41:04.105: INFO: Pod "pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e": Phase="Pending", Reason="", readiness=false. Elapsed: 368.608661ms
May 11 21:41:06.109: INFO: Pod "pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37289842s
May 11 21:41:08.147: INFO: Pod "pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410347954s
May 11 21:41:10.151: INFO: Pod "pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.414750133s
STEP: Saw pod success
May 11 21:41:10.151: INFO: Pod "pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e" satisfied condition "success or failure"
May 11 21:41:10.154: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e container projected-configmap-volume-test:
STEP: delete the pod
May 11 21:41:10.175: INFO: Waiting for pod pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e to disappear
May 11 21:41:10.192: INFO: Pod pod-projected-configmaps-209175fa-40a5-4e60-9812-601b14fa477e no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:41:10.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7543" for this suite.
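The "success or failure" condition polled above treats phase Succeeded as success, Failed as a hard failure, and any other phase (Pending, Running) as "keep waiting". A sketch of that three-way check (hypothetical helper name; the real check lives in the e2e framework's pod-wait helpers):

```python
# Terminal pod phases and whether they satisfy "success or failure".
TERMINAL = {"Succeeded": True, "Failed": False}

def success_or_failure(phase):
    """Map a pod phase to the wait outcome: True = condition satisfied,
    False = test failure, None = not terminal yet, poll again."""
    return TERMINAL.get(phase)
```

In the log the pod stays Pending for three probes, then reaches Succeeded, at which point the framework prints "Saw pod success".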
• [SLOW TEST:8.632 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2933,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:41:10.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:41:27.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2837" for this suite.
• [SLOW TEST:17.416 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":175,"skipped":2978,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:41:27.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0511 21:41:38.037046 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 11 21:41:38.037: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:41:38.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5264" for this suite.
• [SLOW TEST:10.419 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":176,"skipped":3009,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:41:38.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 21:41:39.510: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 21:41:41.517: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830101, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:41:43.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830101, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:41:45.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830101, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 21:41:49.123: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:41:53.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6919" for this suite.
STEP: Destroying namespace "webhook-6919-markers" for this suite.
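The readiness polling above loops while the Deployment's Available condition is False with Reason=MinimumReplicasUnavailable. A sketch of that condition check over a dict-shaped status (an illustrative stand-in for the v1.DeploymentStatus struct printed in the log, not the real Go types):

```python
def deployment_available(status):
    """True once the Deployment reports an 'Available' condition whose
    status is 'True'; while MinimumReplicasUnavailable is the reason,
    the condition stays 'False' and the caller keeps polling."""
    return any(
        c.get("type") == "Available" and c.get("status") == "True"
        for c in status.get("conditions", [])
    )
```

Note that a Progressing=True condition alone is not enough: the log shows the deployment progressing for three polls before the webhook service is finally deployed.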
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.044 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":177,"skipped":3041,"failed":0}
S
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:41:55.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-5570/configmap-test-0f3b202a-7402-41a0-9fe0-283e250f1d8c
STEP: Creating a pod to test consume configMaps
May 11 21:41:55.664: INFO: Waiting up to 5m0s for pod "pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d" in namespace "configmap-5570" to be "success or failure"
May 11 21:41:55.794: INFO: Pod "pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d": Phase="Pending", Reason="", readiness=false. Elapsed: 129.550115ms
May 11 21:41:57.798: INFO: Pod "pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133315192s
May 11 21:41:59.802: INFO: Pod "pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d": Phase="Running", Reason="", readiness=true. Elapsed: 4.137270062s
May 11 21:42:01.812: INFO: Pod "pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.147324815s
STEP: Saw pod success
May 11 21:42:01.812: INFO: Pod "pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d" satisfied condition "success or failure"
May 11 21:42:01.814: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d container env-test:
STEP: delete the pod
May 11 21:42:01.858: INFO: Waiting for pod pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d to disappear
May 11 21:42:01.874: INFO: Pod pod-configmaps-1867a431-9ecc-482d-8622-2fcc2309456d no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:42:01.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5570" for this suite.
• [SLOW TEST:6.794 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3042,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:42:01.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:42:02.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2054" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":179,"skipped":3067,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:42:02.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-9ec7457c-10fb-4e08-a719-b3a9d06bb11f
STEP: Creating a pod to test consume configMaps
May 11 21:42:02.156: INFO: Waiting up to 5m0s for pod "pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161" in namespace "configmap-3935" to be "success or failure"
May 11 21:42:02.291: INFO: Pod "pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161": Phase="Pending", Reason="", readiness=false. Elapsed: 134.466658ms
May 11 21:42:04.294: INFO: Pod "pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137791072s
May 11 21:42:06.298: INFO: Pod "pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14217517s
May 11 21:42:08.708: INFO: Pod "pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161": Phase="Running", Reason="", readiness=true. Elapsed: 6.551265478s
May 11 21:42:10.712: INFO: Pod "pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.555411867s
STEP: Saw pod success
May 11 21:42:10.712: INFO: Pod "pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161" satisfied condition "success or failure"
May 11 21:42:10.715: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161 container configmap-volume-test:
STEP: delete the pod
May 11 21:42:10.733: INFO: Waiting for pod pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161 to disappear
May 11 21:42:10.782: INFO: Pod pod-configmaps-5d263ad1-e38b-4314-964f-ea97cdb95161 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:42:10.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3935" for this suite.
• [SLOW TEST:8.754 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":3071,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:42:10.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:42:18.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6894" for this suite.
• [SLOW TEST:8.763 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":181,"skipped":3080,"failed":0}
SS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:42:19.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
May 11 21:42:21.222: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
May 11 21:42:22.946: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
May 11 21:42:26.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830143, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830142, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:42:28.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830143, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830143, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830142, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 21:42:31.206: INFO: Waited 1.031468625s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:42:35.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3383" for this suite.
• [SLOW TEST:15.799 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":182,"skipped":3082,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:42:35.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 11 21:42:36.227: INFO: Created pod &Pod{ObjectMeta:{dns-3788 dns-3788 /api/v1/namespaces/dns-3788/pods/dns-3788 740a6145-e6d1-4a19-8a79-b0275c4c1ed8 15363658 0 2020-05-11 21:42:36 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2bl4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2bl4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2bl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityCont
ext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
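For context outside the e2e harness: the pod spec dumped above is a `dnsPolicy: None` pod whose resolver configuration comes entirely from `dnsConfig`. A minimal equivalent manifest (a sketch — the pod name is illustrative; the image, nameserver, and search domain are taken from the dump above):

```yaml
# Sketch of a pod equivalent to the one this test creates.
# dnsPolicy "None" makes the kubelet ignore cluster DNS defaults and
# build the container's resolv.conf purely from dnsConfig below.
apiVersion: v1
kind: Pod
metadata:
  name: dns-example          # illustrative; the test uses a generated name
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
      - 1.1.1.1              # from the dumped PodDNSConfig
    searches:
      - resolv.conf.local    # from the dumped PodDNSConfig
  containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
```

The two `ExecWithOptions` calls that follow (`/agnhost dns-suffix`, `/agnhost dns-server-list`) simply read the resulting resolver state back out of the running container to verify these values took effect.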
May 11 21:42:42.252: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3788 PodName:dns-3788 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:42:42.252: INFO: >>> kubeConfig: /root/.kube/config I0511 21:42:42.290351 6 log.go:172] (0xc0014c02c0) (0xc0019ee320) Create stream I0511 21:42:42.290428 6 log.go:172] (0xc0014c02c0) (0xc0019ee320) Stream added, broadcasting: 1 I0511 21:42:42.292344 6 log.go:172] (0xc0014c02c0) Reply frame received for 1 I0511 21:42:42.292374 6 log.go:172] (0xc0014c02c0) (0xc0029f9b80) Create stream I0511 21:42:42.292385 6 log.go:172] (0xc0014c02c0) (0xc0029f9b80) Stream added, broadcasting: 3 I0511 21:42:42.293621 6 log.go:172] (0xc0014c02c0) Reply frame received for 3 I0511 21:42:42.293677 6 log.go:172] (0xc0014c02c0) (0xc0019ee3c0) Create stream I0511 21:42:42.293694 6 log.go:172] (0xc0014c02c0) (0xc0019ee3c0) Stream added, broadcasting: 5 I0511 21:42:42.294586 6 log.go:172] (0xc0014c02c0) Reply frame received for 5 I0511 21:42:42.407432 6 log.go:172] (0xc0014c02c0) Data frame received for 3 I0511 21:42:42.407461 6 log.go:172] (0xc0029f9b80) (3) Data frame handling I0511 21:42:42.407477 6 log.go:172] (0xc0029f9b80) (3) Data frame sent I0511 21:42:42.408729 6 log.go:172] (0xc0014c02c0) Data frame received for 5 I0511 21:42:42.408751 6 log.go:172] (0xc0019ee3c0) (5) Data frame handling I0511 21:42:42.408768 6 log.go:172] (0xc0014c02c0) Data frame received for 3 I0511 21:42:42.408778 6 log.go:172] (0xc0029f9b80) (3) Data frame handling I0511 21:42:42.410616 6 log.go:172] (0xc0014c02c0) Data frame received for 1 I0511 21:42:42.410636 6 log.go:172] (0xc0019ee320) (1) Data frame handling I0511 21:42:42.410652 6 log.go:172] (0xc0019ee320) (1) Data frame sent I0511 21:42:42.410669 6 log.go:172] (0xc0014c02c0) (0xc0019ee320) Stream removed, broadcasting: 1 I0511 21:42:42.410716 6 log.go:172] (0xc0014c02c0) Go away received I0511 21:42:42.410738 6 log.go:172] (0xc0014c02c0) 
(0xc0019ee320) Stream removed, broadcasting: 1 I0511 21:42:42.410782 6 log.go:172] (0xc0014c02c0) (0xc0029f9b80) Stream removed, broadcasting: 3 I0511 21:42:42.410794 6 log.go:172] (0xc0014c02c0) (0xc0019ee3c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 11 21:42:42.410: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3788 PodName:dns-3788 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:42:42.410: INFO: >>> kubeConfig: /root/.kube/config I0511 21:42:42.436202 6 log.go:172] (0xc001c6cf20) (0xc00237e8c0) Create stream I0511 21:42:42.436229 6 log.go:172] (0xc001c6cf20) (0xc00237e8c0) Stream added, broadcasting: 1 I0511 21:42:42.438063 6 log.go:172] (0xc001c6cf20) Reply frame received for 1 I0511 21:42:42.438091 6 log.go:172] (0xc001c6cf20) (0xc001733540) Create stream I0511 21:42:42.438103 6 log.go:172] (0xc001c6cf20) (0xc001733540) Stream added, broadcasting: 3 I0511 21:42:42.438831 6 log.go:172] (0xc001c6cf20) Reply frame received for 3 I0511 21:42:42.438867 6 log.go:172] (0xc001c6cf20) (0xc0019ee5a0) Create stream I0511 21:42:42.438881 6 log.go:172] (0xc001c6cf20) (0xc0019ee5a0) Stream added, broadcasting: 5 I0511 21:42:42.439690 6 log.go:172] (0xc001c6cf20) Reply frame received for 5 I0511 21:42:42.510799 6 log.go:172] (0xc001c6cf20) Data frame received for 3 I0511 21:42:42.510839 6 log.go:172] (0xc001733540) (3) Data frame handling I0511 21:42:42.510870 6 log.go:172] (0xc001733540) (3) Data frame sent I0511 21:42:42.511365 6 log.go:172] (0xc001c6cf20) Data frame received for 5 I0511 21:42:42.511388 6 log.go:172] (0xc0019ee5a0) (5) Data frame handling I0511 21:42:42.511423 6 log.go:172] (0xc001c6cf20) Data frame received for 3 I0511 21:42:42.511441 6 log.go:172] (0xc001733540) (3) Data frame handling I0511 21:42:42.512845 6 log.go:172] (0xc001c6cf20) Data frame received for 1 I0511 21:42:42.512875 6 log.go:172] (0xc00237e8c0) (1) 
Data frame handling I0511 21:42:42.512893 6 log.go:172] (0xc00237e8c0) (1) Data frame sent I0511 21:42:42.512913 6 log.go:172] (0xc001c6cf20) (0xc00237e8c0) Stream removed, broadcasting: 1 I0511 21:42:42.512998 6 log.go:172] (0xc001c6cf20) (0xc00237e8c0) Stream removed, broadcasting: 1 I0511 21:42:42.513022 6 log.go:172] (0xc001c6cf20) (0xc001733540) Stream removed, broadcasting: 3 I0511 21:42:42.513037 6 log.go:172] (0xc001c6cf20) (0xc0019ee5a0) Stream removed, broadcasting: 5 May 11 21:42:42.513: INFO: Deleting pod dns-3788... I0511 21:42:42.513583 6 log.go:172] (0xc001c6cf20) Go away received [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:42:42.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3788" for this suite. • [SLOW TEST:7.208 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":183,"skipped":3110,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:42:42.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9702 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9702 STEP: creating replication controller externalsvc in namespace services-9702 I0511 21:42:43.662945 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9702, replica count: 2 I0511 21:42:46.713511 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:42:49.713693 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:42:52.713872 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 11 21:42:52.954: INFO: Creating new exec pod May 11 21:43:01.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9702 execpod8wg8h -- /bin/sh -x -c nslookup nodeport-service' May 11 21:43:01.853: INFO: stderr: "I0511 21:43:01.312302 2732 log.go:172] (0xc000a9ee70) (0xc00080c1e0) Create stream\nI0511 21:43:01.312362 2732 log.go:172] (0xc000a9ee70) (0xc00080c1e0) Stream added, broadcasting: 1\nI0511 21:43:01.315191 2732 log.go:172] (0xc000a9ee70) Reply frame received for 1\nI0511 21:43:01.315231 2732 log.go:172] (0xc000a9ee70) (0xc0005f5040) Create stream\nI0511 21:43:01.315262 2732 log.go:172] (0xc000a9ee70) 
(0xc0005f5040) Stream added, broadcasting: 3\nI0511 21:43:01.316222 2732 log.go:172] (0xc000a9ee70) Reply frame received for 3\nI0511 21:43:01.316263 2732 log.go:172] (0xc000a9ee70) (0xc00080c280) Create stream\nI0511 21:43:01.316276 2732 log.go:172] (0xc000a9ee70) (0xc00080c280) Stream added, broadcasting: 5\nI0511 21:43:01.317098 2732 log.go:172] (0xc000a9ee70) Reply frame received for 5\nI0511 21:43:01.836669 2732 log.go:172] (0xc000a9ee70) Data frame received for 5\nI0511 21:43:01.836689 2732 log.go:172] (0xc00080c280) (5) Data frame handling\nI0511 21:43:01.836700 2732 log.go:172] (0xc00080c280) (5) Data frame sent\n+ nslookup nodeport-service\nI0511 21:43:01.845605 2732 log.go:172] (0xc000a9ee70) Data frame received for 3\nI0511 21:43:01.845641 2732 log.go:172] (0xc0005f5040) (3) Data frame handling\nI0511 21:43:01.845671 2732 log.go:172] (0xc0005f5040) (3) Data frame sent\nI0511 21:43:01.846496 2732 log.go:172] (0xc000a9ee70) Data frame received for 3\nI0511 21:43:01.846515 2732 log.go:172] (0xc0005f5040) (3) Data frame handling\nI0511 21:43:01.846530 2732 log.go:172] (0xc0005f5040) (3) Data frame sent\nI0511 21:43:01.847034 2732 log.go:172] (0xc000a9ee70) Data frame received for 3\nI0511 21:43:01.847050 2732 log.go:172] (0xc0005f5040) (3) Data frame handling\nI0511 21:43:01.847278 2732 log.go:172] (0xc000a9ee70) Data frame received for 5\nI0511 21:43:01.847301 2732 log.go:172] (0xc00080c280) (5) Data frame handling\nI0511 21:43:01.848652 2732 log.go:172] (0xc000a9ee70) Data frame received for 1\nI0511 21:43:01.848692 2732 log.go:172] (0xc00080c1e0) (1) Data frame handling\nI0511 21:43:01.848722 2732 log.go:172] (0xc00080c1e0) (1) Data frame sent\nI0511 21:43:01.848918 2732 log.go:172] (0xc000a9ee70) (0xc00080c1e0) Stream removed, broadcasting: 1\nI0511 21:43:01.848953 2732 log.go:172] (0xc000a9ee70) Go away received\nI0511 21:43:01.849466 2732 log.go:172] (0xc000a9ee70) (0xc00080c1e0) Stream removed, broadcasting: 1\nI0511 21:43:01.849498 2732 log.go:172] 
(0xc000a9ee70) (0xc0005f5040) Stream removed, broadcasting: 3\nI0511 21:43:01.849508 2732 log.go:172] (0xc000a9ee70) (0xc00080c280) Stream removed, broadcasting: 5\n" May 11 21:43:01.853: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9702.svc.cluster.local\tcanonical name = externalsvc.services-9702.svc.cluster.local.\nName:\texternalsvc.services-9702.svc.cluster.local\nAddress: 10.97.234.205\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9702, will wait for the garbage collector to delete the pods May 11 21:43:02.214: INFO: Deleting ReplicationController externalsvc took: 213.09572ms May 11 21:43:02.914: INFO: Terminating ReplicationController externalsvc pods took: 700.20593ms May 11 21:43:10.309: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:43:10.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9702" for this suite. 
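The type change this test performs amounts to rewriting `nodeport-service` into an ExternalName service that aliases the backing `externalsvc`. A sketch of the resulting object (the `externalName` target is inferred from the canonical-name line in the nslookup output above; the exact update the framework applies may differ):

```yaml
# Sketch of nodeport-service after the NodePort -> ExternalName change.
# An ExternalName service publishes a DNS CNAME instead of a ClusterIP,
# which is why nslookup above resolves it to externalsvc's FQDN.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-9702
spec:
  type: ExternalName
  externalName: externalsvc.services-9702.svc.cluster.local
```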
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:28.888 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":184,"skipped":3123,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:43:11.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5407 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5407 STEP: creating replication controller externalsvc in namespace services-5407 I0511 21:43:13.257987 6 runners.go:189] Created replication controller with name: 
externalsvc, namespace: services-5407, replica count: 2 I0511 21:43:16.308326 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:43:19.308489 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:43:22.308693 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 11 21:43:22.375: INFO: Creating new exec pod May 11 21:43:26.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5407 execpodwn9v6 -- /bin/sh -x -c nslookup clusterip-service' May 11 21:43:26.588: INFO: stderr: "I0511 21:43:26.495142 2751 log.go:172] (0xc0009fb290) (0xc0009f86e0) Create stream\nI0511 21:43:26.495202 2751 log.go:172] (0xc0009fb290) (0xc0009f86e0) Stream added, broadcasting: 1\nI0511 21:43:26.497965 2751 log.go:172] (0xc0009fb290) Reply frame received for 1\nI0511 21:43:26.497999 2751 log.go:172] (0xc0009fb290) (0xc00055c820) Create stream\nI0511 21:43:26.498011 2751 log.go:172] (0xc0009fb290) (0xc00055c820) Stream added, broadcasting: 3\nI0511 21:43:26.498589 2751 log.go:172] (0xc0009fb290) Reply frame received for 3\nI0511 21:43:26.498629 2751 log.go:172] (0xc0009fb290) (0xc00079eb40) Create stream\nI0511 21:43:26.498645 2751 log.go:172] (0xc0009fb290) (0xc00079eb40) Stream added, broadcasting: 5\nI0511 21:43:26.499250 2751 log.go:172] (0xc0009fb290) Reply frame received for 5\nI0511 21:43:26.573720 2751 log.go:172] (0xc0009fb290) Data frame received for 5\nI0511 21:43:26.573753 2751 log.go:172] (0xc00079eb40) (5) Data frame handling\nI0511 21:43:26.573792 2751 log.go:172] (0xc00079eb40) (5) Data frame sent\n+ nslookup clusterip-service\nI0511 21:43:26.580901 2751 log.go:172] 
(0xc0009fb290) Data frame received for 3\nI0511 21:43:26.580918 2751 log.go:172] (0xc00055c820) (3) Data frame handling\nI0511 21:43:26.580933 2751 log.go:172] (0xc00055c820) (3) Data frame sent\nI0511 21:43:26.581998 2751 log.go:172] (0xc0009fb290) Data frame received for 3\nI0511 21:43:26.582018 2751 log.go:172] (0xc00055c820) (3) Data frame handling\nI0511 21:43:26.582033 2751 log.go:172] (0xc00055c820) (3) Data frame sent\nI0511 21:43:26.582439 2751 log.go:172] (0xc0009fb290) Data frame received for 5\nI0511 21:43:26.582457 2751 log.go:172] (0xc00079eb40) (5) Data frame handling\nI0511 21:43:26.582473 2751 log.go:172] (0xc0009fb290) Data frame received for 3\nI0511 21:43:26.582483 2751 log.go:172] (0xc00055c820) (3) Data frame handling\nI0511 21:43:26.584003 2751 log.go:172] (0xc0009fb290) Data frame received for 1\nI0511 21:43:26.584020 2751 log.go:172] (0xc0009f86e0) (1) Data frame handling\nI0511 21:43:26.584033 2751 log.go:172] (0xc0009f86e0) (1) Data frame sent\nI0511 21:43:26.584089 2751 log.go:172] (0xc0009fb290) (0xc0009f86e0) Stream removed, broadcasting: 1\nI0511 21:43:26.584189 2751 log.go:172] (0xc0009fb290) Go away received\nI0511 21:43:26.584308 2751 log.go:172] (0xc0009fb290) (0xc0009f86e0) Stream removed, broadcasting: 1\nI0511 21:43:26.584324 2751 log.go:172] (0xc0009fb290) (0xc00055c820) Stream removed, broadcasting: 3\nI0511 21:43:26.584333 2751 log.go:172] (0xc0009fb290) (0xc00079eb40) Stream removed, broadcasting: 5\n" May 11 21:43:26.588: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5407.svc.cluster.local\tcanonical name = externalsvc.services-5407.svc.cluster.local.\nName:\texternalsvc.services-5407.svc.cluster.local\nAddress: 10.97.116.82\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5407, will wait for the garbage collector to delete the pods May 11 21:43:26.647: INFO: Deleting ReplicationController externalsvc took: 5.533108ms May 11 21:43:26.747: INFO: 
Terminating ReplicationController externalsvc pods took: 100.161495ms May 11 21:43:39.443: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:43:39.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5407" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:28.399 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":185,"skipped":3126,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:43:39.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:44:07.291: INFO: Container started at 2020-05-11 21:43:43 +0000 UTC, pod became ready at 2020-05-11 21:44:03 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:44:07.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3633" for this suite. • [SLOW TEST:27.807 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3140,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:44:07.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: 
verifying the pod is in kubernetes STEP: updating the pod May 11 21:44:20.647: INFO: Successfully updated pod "pod-update-f0032874-f503-402a-8f16-5b2e3ebb174c" STEP: verifying the updated pod is in kubernetes May 11 21:44:21.485: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:44:21.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8527" for this suite. • [SLOW TEST:13.837 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3142,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:44:21.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 11 21:44:24.959: INFO: Pod name pod-release: Found 0 pods out of 1 May 11 21:44:29.970: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is 
released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:44:31.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3566" for this suite. • [SLOW TEST:10.413 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":188,"skipped":3145,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:44:31.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 21:44:34.266: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 21:44:36.536: INFO: Waiting for terminating namespaces to be deleted... 
May 11 21:44:36.731: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 11 21:44:36.744: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 21:44:36.744: INFO: Container kindnet-cni ready: true, restart count 0 May 11 21:44:36.744: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 21:44:36.744: INFO: Container kube-proxy ready: true, restart count 0 May 11 21:44:36.744: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 11 21:44:36.756: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 11 21:44:36.756: INFO: Container kube-hunter ready: false, restart count 0 May 11 21:44:36.756: INFO: pod-release-ccq4x from replication-controller-3566 started at 2020-05-11 21:44:26 +0000 UTC (1 container statuses recorded) May 11 21:44:36.756: INFO: Container pod-release ready: true, restart count 0 May 11 21:44:36.756: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 21:44:36.756: INFO: Container kindnet-cni ready: true, restart count 0 May 11 21:44:36.756: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 11 21:44:36.756: INFO: Container kube-bench ready: false, restart count 0 May 11 21:44:36.756: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 21:44:36.756: INFO: Container kube-proxy ready: true, restart count 0 May 11 21:44:36.756: INFO: pod-release-5qfw8 from replication-controller-3566 started at 2020-05-11 21:44:31 +0000 UTC (1 container statuses recorded) May 11 21:44:36.756: INFO: Container pod-release ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 11 21:44:38.702: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 11 21:44:38.702: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 11 21:44:38.702: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 11 21:44:38.702: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 May 11 21:44:38.702: INFO: Pod pod-release-5qfw8 requesting resource cpu=0m on Node jerma-worker2 May 11 21:44:38.702: INFO: Pod pod-release-ccq4x requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 11 21:44:38.702: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 11 21:44:38.835: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. 
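The "another pod" created here is simply a pod whose CPU request cannot fit after the filler pods have claimed cpu=11130m on each worker. A hypothetical equivalent (the pod name matches the FailedScheduling event below; the request amount is illustrative):

```yaml
# Hypothetical pod mirroring the test's unschedulable request: with the
# filler pods consuming nearly all allocatable CPU on both workers, any
# further non-trivial CPU request triggers a FailedScheduling event.
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod       # name taken from the event below
spec:
  containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
      resources:
        requests:
          cpu: "500m"        # illustrative; anything above remaining allocatable CPU fails
```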
STEP: Considering event: Type = [Normal], Name = [filler-pod-305c335d-026f-4ee6-a82d-fccecb31d82d.160e176dde4d9d47], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9113/filler-pod-305c335d-026f-4ee6-a82d-fccecb31d82d to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-305c335d-026f-4ee6-a82d-fccecb31d82d.160e176e6f8ae4f5], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-305c335d-026f-4ee6-a82d-fccecb31d82d.160e176f5205b93e], Reason = [Created], Message = [Created container filler-pod-305c335d-026f-4ee6-a82d-fccecb31d82d] STEP: Considering event: Type = [Normal], Name = [filler-pod-305c335d-026f-4ee6-a82d-fccecb31d82d.160e176f73abe78e], Reason = [Started], Message = [Started container filler-pod-305c335d-026f-4ee6-a82d-fccecb31d82d] STEP: Considering event: Type = [Normal], Name = [filler-pod-7d3fef57-434f-44a9-8335-04a6d42ba0f2.160e176de30fc3f0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9113/filler-pod-7d3fef57-434f-44a9-8335-04a6d42ba0f2 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7d3fef57-434f-44a9-8335-04a6d42ba0f2.160e176edf7d7f00], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-7d3fef57-434f-44a9-8335-04a6d42ba0f2.160e176fbe8a9f6f], Reason = [Created], Message = [Created container filler-pod-7d3fef57-434f-44a9-8335-04a6d42ba0f2] STEP: Considering event: Type = [Normal], Name = [filler-pod-7d3fef57-434f-44a9-8335-04a6d42ba0f2.160e176fd538b608], Reason = [Started], Message = [Started container filler-pod-7d3fef57-434f-44a9-8335-04a6d42ba0f2] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e177058f5f835], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 
Insufficient cpu.] STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:44:50.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9113" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:19.008 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":189,"skipped":3145,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:44:50.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:44:52.355: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0c107ded-8df8-4724-9b1c-e5adee1270fd" in namespace "security-context-test-2918" to be "success or failure" May 11 21:44:52.816: INFO: Pod "alpine-nnp-false-0c107ded-8df8-4724-9b1c-e5adee1270fd": Phase="Pending", Reason="", readiness=false. Elapsed: 461.187029ms May 11 21:44:54.819: INFO: Pod "alpine-nnp-false-0c107ded-8df8-4724-9b1c-e5adee1270fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.463811201s May 11 21:44:57.462: INFO: Pod "alpine-nnp-false-0c107ded-8df8-4724-9b1c-e5adee1270fd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.10694733s May 11 21:44:59.571: INFO: Pod "alpine-nnp-false-0c107ded-8df8-4724-9b1c-e5adee1270fd": Phase="Running", Reason="", readiness=true. Elapsed: 7.216398178s May 11 21:45:01.676: INFO: Pod "alpine-nnp-false-0c107ded-8df8-4724-9b1c-e5adee1270fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.321540983s May 11 21:45:01.676: INFO: Pod "alpine-nnp-false-0c107ded-8df8-4724-9b1c-e5adee1270fd" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:45:01.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2918" for this suite. 
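The alpine-nnp-false pod above is the kind of pod this test creates: a container whose security context forbids privilege escalation. A minimal manifest sketch, with field names following the core/v1 Pod schema (the pod name, image, and command are illustrative, not the test's actual values):

```python
# Minimal sketch of a pod spec with allowPrivilegeEscalation disabled,
# as exercised by this test. Names and image are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "alpine-nnp-false-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "main",
            "image": "alpine:3.11",
            "command": ["sh", "-c", "id"],
            "securityContext": {"allowPrivilegeEscalation": False},
        }],
    },
}
assert pod["spec"]["containers"][0]["securityContext"]["allowPrivilegeEscalation"] is False
```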
• [SLOW TEST:10.832 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3153,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:45:01.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 11 21:45:02.147: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 
21:45:21.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5391" for this suite. • [SLOW TEST:20.829 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":191,"skipped":3162,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:45:22.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9f41860a-816e-40cd-8823-edaee84f4652 STEP: Creating a pod to test consume secrets May 11 21:45:23.664: INFO: Waiting up to 5m0s for pod "pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46" in namespace "secrets-4307" to be "success or failure" May 11 21:45:23.875: INFO: Pod "pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46": Phase="Pending", Reason="", readiness=false. 
Elapsed: 210.570056ms May 11 21:45:25.941: INFO: Pod "pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.276904878s May 11 21:45:28.217: INFO: Pod "pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.552310951s May 11 21:45:30.221: INFO: Pod "pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.556983043s STEP: Saw pod success May 11 21:45:30.221: INFO: Pod "pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46" satisfied condition "success or failure" May 11 21:45:30.224: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46 container secret-volume-test: STEP: delete the pod May 11 21:45:30.361: INFO: Waiting for pod pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46 to disappear May 11 21:45:30.409: INFO: Pod pod-secrets-10e53f43-6ec8-4202-ac7e-0d91f6486c46 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:45:30.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4307" for this suite. 
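The defaultMode this test sets on the secret volume is serialized by Kubernetes as a decimal integer, which is why the familiar octal file mode 0644 shows up as 420 in API object dumps (see DefaultMode:*420 in the pod spec printed later in this run). A quick check of the conversion:

```python
# Kubernetes serializes volume defaultMode as a decimal integer, so the
# octal mode 0644 appears as 420 in API dumps and logs.
assert 0o644 == 420
print(oct(420))        # '0o644'
print(int("644", 8))   # 420
```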
• [SLOW TEST:8.330 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3162,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:45:30.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:45:53.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5340" for this suite. STEP: Destroying namespace "nsdeletetest-1860" for this suite. May 11 21:45:54.181: INFO: Namespace nsdeletetest-1860 was already deleted STEP: Destroying namespace "nsdeletetest-4259" for this suite. • [SLOW TEST:23.278 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":193,"skipped":3167,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:45:54.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:45:54.590: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 11 21:45:54.610: INFO: Number of nodes with available pods: 0 May 11 21:45:54.610: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 11 21:45:54.659: INFO: Number of nodes with available pods: 0 May 11 21:45:54.660: INFO: Node jerma-worker is running more than one daemon pod May 11 21:45:55.663: INFO: Number of nodes with available pods: 0 May 11 21:45:55.663: INFO: Node jerma-worker is running more than one daemon pod May 11 21:45:56.719: INFO: Number of nodes with available pods: 0 May 11 21:45:56.719: INFO: Node jerma-worker is running more than one daemon pod May 11 21:45:57.782: INFO: Number of nodes with available pods: 0 May 11 21:45:57.782: INFO: Node jerma-worker is running more than one daemon pod May 11 21:45:58.671: INFO: Number of nodes with available pods: 0 May 11 21:45:58.671: INFO: Node jerma-worker is running more than one daemon pod May 11 21:45:59.666: INFO: Number of nodes with available pods: 1 May 11 21:45:59.666: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 11 21:45:59.710: INFO: Number of nodes with available pods: 1 May 11 21:45:59.710: INFO: Number of running nodes: 0, number of available pods: 1 May 11 21:46:00.717: INFO: Number of nodes with available pods: 0 May 11 21:46:00.717: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 11 21:46:00.740: INFO: Number of nodes with available pods: 0 May 11 21:46:00.740: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:01.798: INFO: Number of 
nodes with available pods: 0 May 11 21:46:01.798: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:02.953: INFO: Number of nodes with available pods: 0 May 11 21:46:02.953: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:03.779: INFO: Number of nodes with available pods: 0 May 11 21:46:03.779: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:05.312: INFO: Number of nodes with available pods: 0 May 11 21:46:05.312: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:05.743: INFO: Number of nodes with available pods: 0 May 11 21:46:05.743: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:06.744: INFO: Number of nodes with available pods: 0 May 11 21:46:06.744: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:07.743: INFO: Number of nodes with available pods: 0 May 11 21:46:07.743: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:08.744: INFO: Number of nodes with available pods: 0 May 11 21:46:08.744: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:09.744: INFO: Number of nodes with available pods: 0 May 11 21:46:09.744: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:10.742: INFO: Number of nodes with available pods: 0 May 11 21:46:10.742: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:11.772: INFO: Number of nodes with available pods: 0 May 11 21:46:11.772: INFO: Node jerma-worker is running more than one daemon pod May 11 21:46:12.766: INFO: Number of nodes with available pods: 1 May 11 21:46:12.766: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6033, will wait for the 
garbage collector to delete the pods May 11 21:46:12.865: INFO: Deleting DaemonSet.extensions daemon-set took: 5.087372ms May 11 21:46:13.165: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.161799ms May 11 21:46:17.869: INFO: Number of nodes with available pods: 0 May 11 21:46:17.869: INFO: Number of running nodes: 0, number of available pods: 0 May 11 21:46:17.872: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6033/daemonsets","resourceVersion":"15364746"},"items":null} May 11 21:46:17.906: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6033/pods","resourceVersion":"15364746"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:46:17.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6033" for this suite. 
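The label dance in the DaemonSet test above (label a node blue, relabel it green, update the selector) exercises the controller's node eligibility rule: a daemon pod is desired only on nodes whose labels satisfy the pod template's node selector. A simplified sketch of that rule (node names match this run; the label key/values are hypothetical):

```python
# Simplified sketch of DaemonSet node eligibility: a daemon pod is
# desired on a node iff the node's labels satisfy the nodeSelector.
def desired_nodes(nodes: dict, node_selector: dict) -> list:
    """Names of nodes on which a DaemonSet with this selector should run a pod."""
    return [
        name for name, labels in nodes.items()
        if all(labels.get(k) == v for k, v in node_selector.items())
    ]

nodes = {
    "jerma-worker": {"color": "blue"},
    "jerma-worker2": {"color": "green"},
}
assert desired_nodes(nodes, {"color": "blue"}) == ["jerma-worker"]
# Relabel the node: the daemon pod is unscheduled from it.
nodes["jerma-worker"]["color"] = "green"
assert "jerma-worker" in desired_nodes(nodes, {"color": "green"})
```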
• [SLOW TEST:23.785 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":194,"skipped":3205,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:46:17.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6394.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6394.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6394.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6394.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6394.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6394.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 21:46:29.400: INFO: DNS probes using dns-6394/dns-test-1d7962ad-12c5-4dae-8ae6-8ba0284597b2 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:46:29.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6394" for this suite. 
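The awk pipeline in the probe commands above builds the pod's DNS A-record name by replacing the dots in its IP with dashes and appending the namespace and pod domain. The same transformation in Python (namespace from this run; the pod IP is illustrative):

```python
# Pod A-record name construction, mirroring the awk pipeline in the
# probe script above: dots in the pod IP become dashes.
def pod_a_record(pod_ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """DNS A-record name for a pod IP in a given namespace."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{domain}"

assert pod_a_record("10.244.1.3", "dns-6394") == "10-244-1-3.dns-6394.pod.cluster.local"
```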
• [SLOW TEST:12.247 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":195,"skipped":3219,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:46:30.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:46:30.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8185" for this suite. 
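The QOS-class verification above follows the standard classification: Guaranteed when every container's requests equal its limits for both cpu and memory, BestEffort when no container sets any requests or limits, Burstable otherwise. A simplified sketch (the real kubelet logic handles more per-resource cases):

```python
# Simplified QoS classification over a list of per-container resource dicts,
# each of the form {"requests": {...}, "limits": {...}}.
def qos_class(containers: list) -> str:
    if not any(c.get("requests") or c.get("limits") for c in containers):
        return "BestEffort"
    if all(
        c.get("requests") and c.get("requests") == c.get("limits")
        and set(c["requests"]) == {"cpu", "memory"}
        for c in containers
    ):
        return "Guaranteed"
    return "Burstable"

guaranteed = [{"requests": {"cpu": "100m", "memory": "100Mi"},
               "limits":   {"cpu": "100m", "memory": "100Mi"}}]
assert qos_class(guaranteed) == "Guaranteed"      # matching requests and limits
assert qos_class([{}]) == "BestEffort"            # nothing set
assert qos_class([{"requests": {"cpu": "100m"}}]) == "Burstable"
```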
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":196,"skipped":3228,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:46:31.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:46:33.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02" in namespace "downward-api-8791" to be "success or failure" May 11 21:46:33.421: INFO: Pod "downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02": Phase="Pending", Reason="", readiness=false. Elapsed: 184.441163ms May 11 21:46:35.571: INFO: Pod "downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.333558216s May 11 21:46:37.654: INFO: Pod "downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.417207105s May 11 21:46:39.827: INFO: Pod "downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.58957879s May 11 21:46:42.666: INFO: Pod "downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.428898958s STEP: Saw pod success May 11 21:46:42.666: INFO: Pod "downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02" satisfied condition "success or failure" May 11 21:46:42.669: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02 container client-container: STEP: delete the pod May 11 21:46:42.908: INFO: Waiting for pod downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02 to disappear May 11 21:46:43.219: INFO: Pod downwardapi-volume-a37f05e0-672a-4f3e-a3ef-06718fe71b02 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:46:43.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8791" for this suite. 
• [SLOW TEST:11.846 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3234,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:46:43.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 21:46:43.962: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf" in namespace "downward-api-829" to be "success or failure" May 11 21:46:44.048: INFO: Pod "downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 86.229048ms May 11 21:46:46.184: INFO: Pod "downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.222221409s May 11 21:46:48.188: INFO: Pod "downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.226183752s May 11 21:46:50.247: INFO: Pod "downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.285499802s STEP: Saw pod success May 11 21:46:50.248: INFO: Pod "downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf" satisfied condition "success or failure" May 11 21:46:50.251: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf container client-container: STEP: delete the pod May 11 21:46:50.343: INFO: Waiting for pod downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf to disappear May 11 21:46:50.372: INFO: Pod downwardapi-volume-0804e8be-3655-444f-9c1f-2fc96d5d4cbf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:46:50.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-829" for this suite. 
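The downward API volume these tests use exposes a container's resource values as files via resourceFieldRef. A minimal manifest sketch of the volume wiring (container name, image, limit value, and mount path are illustrative, not the test's actual values):

```python
# Sketch of a downward-API volume exposing a container's cpu limit as a
# file, as the test above does. Names and values are illustrative.
pod_spec = {
    "containers": [{
        "name": "client-container",
        "image": "busybox:1.31",
        "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
        "resources": {"limits": {"cpu": "500m"}},
        "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
    }],
    "volumes": [{
        "name": "podinfo",
        "downwardAPI": {"items": [{
            "path": "cpu_limit",
            "resourceFieldRef": {
                "containerName": "client-container",
                "resource": "limits.cpu",
                "divisor": "1m",   # report the limit in millicores
            },
        }]},
    }],
}
item = pod_spec["volumes"][0]["downwardAPI"]["items"][0]
assert item["resourceFieldRef"]["resource"] == "limits.cpu"
```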
• [SLOW TEST:6.957 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3246,"failed":0} S ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:46:50.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 11 21:46:56.508: INFO: &Pod{ObjectMeta:{send-events-3c042e3d-aad8-4a4c-8f91-ef0976eb9170 events-9516 /api/v1/namespaces/events-9516/pods/send-events-3c042e3d-aad8-4a4c-8f91-ef0976eb9170 58eb1d98-c39f-47a2-a9d8-b9f511ed4b42 15364982 0 2020-05-11 21:46:50 +0000 UTC map[name:foo time:443347596] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-h5pqq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-h5pqq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-h5pqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,
Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:46:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:46:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:46:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 21:46:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.230,StartTime:2020-05-11 21:46:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 21:46:53 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://4c933b28c980a503a9cfad28a995b197eab0f7af9fc42eb8638ebf1ed5114f03,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 11 21:46:58.512: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 11 21:47:00.516: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:47:00.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-9516" for this suite. 
• [SLOW TEST:10.168 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":199,"skipped":3247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:47:00.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-fw9kk in namespace proxy-2497 I0511 21:47:01.523604 6 runners.go:189] Created replication controller with name: proxy-service-fw9kk, namespace: proxy-2497, replica count: 1 I0511 21:47:02.574168 6 runners.go:189] proxy-service-fw9kk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:47:03.574369 6 runners.go:189] proxy-service-fw9kk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:47:04.574543 6 runners.go:189] 
proxy-service-fw9kk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:47:05.574723 6 runners.go:189] proxy-service-fw9kk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:47:06.574888 6 runners.go:189] proxy-service-fw9kk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0511 21:47:07.575098 6 runners.go:189] proxy-service-fw9kk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0511 21:47:08.575260 6 runners.go:189] proxy-service-fw9kk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 11 21:47:08.578: INFO: setup took 7.93469986s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 11 21:47:08.586: INFO: (0) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 8.697503ms) May 11 21:47:08.587: INFO: (0) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 8.866898ms) May 11 21:47:08.587: INFO: (0) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... 
(200; 9.053468ms) May 11 21:47:08.588: INFO: (0) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 10.134766ms) May 11 21:47:08.588: INFO: (0) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 10.17008ms) May 11 21:47:08.589: INFO: (0) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 11.294424ms) May 11 21:47:08.589: INFO: (0) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 11.284596ms) May 11 21:47:08.589: INFO: (0) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 11.426527ms) May 11 21:47:08.589: INFO: (0) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 11.286594ms) May 11 21:47:08.589: INFO: (0) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 11.516575ms) May 11 21:47:08.590: INFO: (0) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 11.880109ms) May 11 21:47:08.597: INFO: (0) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 19.287083ms) May 11 21:47:08.597: INFO: (0) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 19.321138ms) May 11 21:47:08.597: INFO: (0) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 19.246821ms) May 11 21:47:08.597: INFO: (0) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 19.173258ms) May 11 21:47:08.598: INFO: (0) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test<... 
(200; 6.002224ms) May 11 21:47:08.604: INFO: (1) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 5.971473ms) May 11 21:47:08.604: INFO: (1) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 6.008771ms) May 11 21:47:08.604: INFO: (1) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 6.209801ms) May 11 21:47:08.604: INFO: (1) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 6.117636ms) May 11 21:47:08.604: INFO: (1) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 5.967148ms) May 11 21:47:08.604: INFO: (1) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 6.123791ms) May 11 21:47:08.604: INFO: (1) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... (200; 6.428279ms) May 11 21:47:08.608: INFO: (2) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test<... (200; 3.369214ms) May 11 21:47:08.608: INFO: (2) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.479737ms) May 11 21:47:08.608: INFO: (2) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 3.560001ms) May 11 21:47:08.608: INFO: (2) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 3.556112ms) May 11 21:47:08.609: INFO: (2) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... 
(200; 3.873073ms) May 11 21:47:08.609: INFO: (2) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 4.067645ms) May 11 21:47:08.610: INFO: (2) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 5.396382ms) May 11 21:47:08.610: INFO: (2) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 5.566511ms) May 11 21:47:08.610: INFO: (2) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 5.499425ms) May 11 21:47:08.610: INFO: (2) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 5.459215ms) May 11 21:47:08.610: INFO: (2) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 5.611165ms) May 11 21:47:08.611: INFO: (2) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 5.862815ms) May 11 21:47:08.613: INFO: (3) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 2.608357ms) May 11 21:47:08.614: INFO: (3) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 2.992673ms) May 11 21:47:08.614: INFO: (3) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 3.165107ms) May 11 21:47:08.616: INFO: (3) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 5.079586ms) May 11 21:47:08.616: INFO: (3) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 5.055371ms) May 11 21:47:08.616: INFO: (3) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... 
(200; 5.141298ms) May 11 21:47:08.616: INFO: (3) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 5.140982ms) May 11 21:47:08.616: INFO: (3) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 5.278993ms) May 11 21:47:08.616: INFO: (3) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 5.245914ms) May 11 21:47:08.617: INFO: (3) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 6.236629ms) May 11 21:47:08.617: INFO: (3) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 6.344407ms) May 11 21:47:08.617: INFO: (3) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 6.263677ms) May 11 21:47:08.617: INFO: (3) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 6.315225ms) May 11 21:47:08.618: INFO: (3) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 6.865929ms) May 11 21:47:08.622: INFO: (3) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 11.177691ms) May 11 21:47:08.625: INFO: (4) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.006826ms) May 11 21:47:08.626: INFO: (4) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... 
(200; 3.62809ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 4.467948ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 4.5802ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.681211ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.714174ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 4.712819ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 5.03176ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 5.164925ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... 
(200; 5.352142ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 5.297541ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 5.304234ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 5.377556ms) May 11 21:47:08.627: INFO: (4) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 5.306024ms) May 11 21:47:08.628: INFO: (4) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 5.480304ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.261637ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.336535ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 4.226623ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.240028ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 4.327021ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... 
(200; 4.258749ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 4.350317ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 4.323276ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 4.431224ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.341613ms) May 11 21:47:08.632: INFO: (5) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... (200; 3.500765ms) May 11 21:47:08.638: INFO: (6) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 3.69293ms) May 11 21:47:08.638: INFO: (6) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 3.919776ms) May 11 21:47:08.638: INFO: (6) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 3.900779ms) May 11 21:47:08.638: INFO: (6) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.886876ms) May 11 21:47:08.638: INFO: (6) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 3.954636ms) May 11 21:47:08.638: INFO: (6) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test<... (200; 4.345118ms) May 11 21:47:08.646: INFO: (7) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... 
(200; 4.425837ms) May 11 21:47:08.646: INFO: (7) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.733607ms) May 11 21:47:08.646: INFO: (7) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test (200; 5.655379ms) May 11 21:47:08.647: INFO: (7) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 5.741089ms) May 11 21:47:08.647: INFO: (7) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 5.715235ms) May 11 21:47:08.650: INFO: (8) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 2.838369ms) May 11 21:47:08.650: INFO: (8) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 2.88362ms) May 11 21:47:08.651: INFO: (8) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 3.301429ms) May 11 21:47:08.651: INFO: (8) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 3.539453ms) May 11 21:47:08.651: INFO: (8) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 3.631367ms) May 11 21:47:08.651: INFO: (8) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 3.679645ms) May 11 21:47:08.651: INFO: (8) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... 
(200; 3.901404ms) May 11 21:47:08.652: INFO: (8) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 4.078332ms) May 11 21:47:08.652: INFO: (8) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 4.515815ms) May 11 21:47:08.652: INFO: (8) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 4.656321ms) May 11 21:47:08.652: INFO: (8) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 4.618874ms) May 11 21:47:08.652: INFO: (8) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 4.812615ms) May 11 21:47:08.652: INFO: (8) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 4.774794ms) May 11 21:47:08.652: INFO: (8) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 4.780801ms) May 11 21:47:08.652: INFO: (8) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 4.860197ms) May 11 21:47:08.653: INFO: (8) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... (200; 3.676286ms) May 11 21:47:08.657: INFO: (9) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 3.628055ms) May 11 21:47:08.657: INFO: (9) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 3.749551ms) May 11 21:47:08.657: INFO: (9) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 3.818965ms) May 11 21:47:08.657: INFO: (9) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... 
(200; 3.783219ms) May 11 21:47:08.658: INFO: (9) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.926777ms) May 11 21:47:08.659: INFO: (9) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test<... (200; 4.680546ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 6.704377ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 6.721541ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 6.753895ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 6.745304ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 6.749814ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... (200; 6.835371ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 6.79958ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 6.900682ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 6.950005ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 6.908185ms) May 11 21:47:08.669: INFO: (10) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 6.879284ms) May 11 21:47:08.673: INFO: (11) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... 
(200; 4.142022ms) May 11 21:47:08.673: INFO: (11) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.183376ms) May 11 21:47:08.673: INFO: (11) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.302727ms) May 11 21:47:08.674: INFO: (11) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 5.083856ms) May 11 21:47:08.674: INFO: (11) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 4.959228ms) May 11 21:47:08.674: INFO: (11) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 5.1312ms) May 11 21:47:08.675: INFO: (11) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 5.412478ms) May 11 21:47:08.675: INFO: (11) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 5.890136ms) May 11 21:47:08.675: INFO: (11) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 5.87796ms) May 11 21:47:08.675: INFO: (11) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 5.909662ms) May 11 21:47:08.675: INFO: (11) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 6.267531ms) May 11 21:47:08.676: INFO: (11) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 6.275955ms) May 11 21:47:08.676: INFO: (11) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 6.463098ms) May 11 21:47:08.676: INFO: (11) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 6.325994ms) May 11 21:47:08.678: INFO: (12) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 2.44396ms) May 11 21:47:08.678: INFO: (12) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 
2.753937ms) May 11 21:47:08.680: INFO: (12) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 4.237954ms) May 11 21:47:08.680: INFO: (12) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 4.522505ms) May 11 21:47:08.680: INFO: (12) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.641173ms) May 11 21:47:08.680: INFO: (12) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 4.757114ms) May 11 21:47:08.680: INFO: (12) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.778694ms) May 11 21:47:08.681: INFO: (12) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... (200; 5.242167ms) May 11 21:47:08.681: INFO: (12) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 5.284252ms) May 11 21:47:08.682: INFO: (12) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 6.502263ms) May 11 21:47:08.682: INFO: (12) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 6.642334ms) May 11 21:47:08.682: INFO: (12) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 6.701946ms) May 11 21:47:08.682: INFO: (12) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 6.674225ms) May 11 21:47:08.682: INFO: (12) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 6.672673ms) May 11 21:47:08.682: INFO: (12) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 6.656457ms) May 11 21:47:08.684: INFO: (13) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 1.804515ms) May 11 21:47:08.684: INFO: (13) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test 
(200; 1.964387ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 6.129844ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test<... (200; 6.385623ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 6.47759ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 6.449726ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 6.595809ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 6.614033ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 6.796337ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 6.887764ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... 
(200; 6.819275ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 6.854053ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 6.881203ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 6.946602ms) May 11 21:47:08.689: INFO: (13) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 7.036078ms) May 11 21:47:08.693: INFO: (14) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.615134ms) May 11 21:47:08.693: INFO: (14) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.722493ms) May 11 21:47:08.694: INFO: (14) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 4.543245ms) May 11 21:47:08.694: INFO: (14) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... (200; 4.521541ms) May 11 21:47:08.694: INFO: (14) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 4.546702ms) May 11 21:47:08.694: INFO: (14) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test<... 
(200; 4.832894ms) May 11 21:47:08.694: INFO: (14) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 4.809694ms) May 11 21:47:08.694: INFO: (14) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 4.804673ms) May 11 21:47:08.694: INFO: (14) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.795943ms) May 11 21:47:08.694: INFO: (14) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 4.996575ms) May 11 21:47:08.695: INFO: (14) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 5.07175ms) May 11 21:47:08.695: INFO: (14) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 5.100299ms) May 11 21:47:08.698: INFO: (15) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 3.299581ms) May 11 21:47:08.698: INFO: (15) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.369976ms) May 11 21:47:08.698: INFO: (15) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.467419ms) May 11 21:47:08.698: INFO: (15) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 3.731309ms) May 11 21:47:08.698: INFO: (15) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 3.701443ms) May 11 21:47:08.698: INFO: (15) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... (200; 3.796632ms) May 11 21:47:08.699: INFO: (15) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... 
(200; 3.838412ms) May 11 21:47:08.699: INFO: (15) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 3.828563ms) May 11 21:47:08.699: INFO: (15) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test (200; 7.995723ms) May 11 21:47:08.709: INFO: (16) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 8.611769ms) May 11 21:47:08.709: INFO: (16) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 8.557431ms) May 11 21:47:08.710: INFO: (16) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 9.347567ms) May 11 21:47:08.710: INFO: (16) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 9.506266ms) May 11 21:47:08.711: INFO: (16) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 10.231981ms) May 11 21:47:08.711: INFO: (16) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... (200; 10.083024ms) May 11 21:47:08.711: INFO: (16) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 10.052196ms) May 11 21:47:08.711: INFO: (16) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test<... (200; 12.313854ms) May 11 21:47:08.716: INFO: (17) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: ... (200; 4.097125ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.273192ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c/proxy/: test (200; 4.155768ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.178985ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... 
(200; 4.304072ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 4.271319ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:460/proxy/: tls baz (200; 4.283719ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.300669ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 4.283529ms) May 11 21:47:08.717: INFO: (17) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 4.262874ms) May 11 21:47:08.718: INFO: (17) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 4.640008ms) May 11 21:47:08.718: INFO: (17) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 4.912455ms) May 11 21:47:08.718: INFO: (17) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 5.005365ms) May 11 21:47:08.718: INFO: (17) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 5.030923ms) May 11 21:47:08.718: INFO: (17) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 5.131304ms) May 11 21:47:08.722: INFO: (18) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.510023ms) May 11 21:47:08.722: INFO: (18) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... 
(200; 3.499129ms) May 11 21:47:08.722: INFO: (18) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 3.565941ms) May 11 21:47:08.722: INFO: (18) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 3.638565ms) May 11 21:47:08.722: INFO: (18) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test (200; 3.671647ms) May 11 21:47:08.722: INFO: (18) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 4.061041ms) May 11 21:47:08.722: INFO: (18) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 4.045798ms) May 11 21:47:08.722: INFO: (18) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 4.126155ms) May 11 21:47:08.723: INFO: (18) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 4.390888ms) May 11 21:47:08.723: INFO: (18) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 4.734407ms) May 11 21:47:08.723: INFO: (18) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 4.858844ms) May 11 21:47:08.723: INFO: (18) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 4.945696ms) May 11 21:47:08.723: INFO: (18) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 4.995672ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.010183ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 3.24228ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:1080/proxy/: ... 
(200; 2.801997ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname2/proxy/: bar (200; 3.238424ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:160/proxy/: foo (200; 3.074327ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/pods/http:proxy-service-fw9kk-kdj9c:162/proxy/: bar (200; 3.126365ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/pods/proxy-service-fw9kk-kdj9c:1080/proxy/: test<... (200; 3.192047ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/: tls qux (200; 3.627038ms) May 11 21:47:08.727: INFO: (19) /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:443/proxy/: test (200; 3.742873ms) May 11 21:47:08.728: INFO: (19) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname2/proxy/: tls qux (200; 4.120942ms) May 11 21:47:08.728: INFO: (19) /api/v1/namespaces/proxy-2497/services/http:proxy-service-fw9kk:portname1/proxy/: foo (200; 4.164615ms) May 11 21:47:08.728: INFO: (19) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname1/proxy/: foo (200; 3.898832ms) May 11 21:47:08.728: INFO: (19) /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/: tls baz (200; 3.887224ms) May 11 21:47:08.728: INFO: (19) /api/v1/namespaces/proxy-2497/services/proxy-service-fw9kk:portname2/proxy/: bar (200; 4.068209ms) STEP: deleting ReplicationController proxy-service-fw9kk in namespace proxy-2497, will wait for the garbage collector to delete the pods May 11 21:47:08.785: INFO: Deleting ReplicationController proxy-service-fw9kk took: 4.82723ms May 11 21:47:09.085: INFO: Terminating ReplicationController proxy-service-fw9kk pods took: 300.196594ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:47:11.885: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2497" for this suite.
• [SLOW TEST:11.365 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":200,"skipped":3277,"failed":0}
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:47:11.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-8ec90230-5820-4a06-8a91-761afdd83836
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-8ec90230-5820-4a06-8a91-761afdd83836
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:48:37.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8669" for this suite.
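The Proxy conformance test above probes apiserver proxy paths of the form `/api/v1/namespaces/<ns>/<pods|services>/<scheme>:<name>:<port>/proxy/`. A minimal sketch of how such paths can be assembled (the helper name is mine, not the framework's):

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy path like those probed in the log above.

    kind is "pods" or "services"; scheme ("http"/"https") and port are
    optional, matching the <scheme>:<name>:<port> subresource syntax.
    The port may be numeric or a named service port.
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"
    if port is not None:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"


# Paths taken from the log entries above:
print(proxy_path("proxy-2497", "pods", "proxy-service-fw9kk-kdj9c", 462, "https"))
# /api/v1/namespaces/proxy-2497/pods/https:proxy-service-fw9kk-kdj9c:462/proxy/
print(proxy_path("proxy-2497", "services", "proxy-service-fw9kk", "tlsportname1", "https"))
# /api/v1/namespaces/proxy-2497/services/https:proxy-service-fw9kk:tlsportname1/proxy/
```

The test hits each variant (pod by name, pod with scheme and port, service with named port) twenty times and records the round-trip latency shown in parentheses.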
• [SLOW TEST:85.680 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3277,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:48:37.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-b8b25d6d-18a5-4914-b7f0-3b63a9d16d96
STEP: Creating a pod to test consume secrets
May 11 21:48:37.733: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3" in namespace "projected-327" to be "success or failure"
May 11 21:48:37.774: INFO: Pod "pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 40.51061ms
May 11 21:48:39.777: INFO: Pod "pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043839672s
May 11 21:48:41.902: INFO: Pod "pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168588196s
May 11 21:48:43.905: INFO: Pod "pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3": Phase="Running", Reason="", readiness=true. Elapsed: 6.172222662s
May 11 21:48:45.911: INFO: Pod "pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.177340778s
STEP: Saw pod success
May 11 21:48:45.911: INFO: Pod "pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3" satisfied condition "success or failure"
May 11 21:48:45.914: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3 container projected-secret-volume-test:
STEP: delete the pod
May 11 21:48:46.239: INFO: Waiting for pod pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3 to disappear
May 11 21:48:46.290: INFO: Pod pod-projected-secrets-31f288c9-b3fc-420f-b316-0631014e1ec3 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:48:46.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-327" for this suite.
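The 'Waiting up to 5m0s for pod ... to be "success or failure"' sequence above is a poll loop over the pod's phase, logging Pending/Running/Succeeded transitions until a terminal phase is seen. A sketch of that pattern (my own helper, not the framework's code):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or timeout expires.

    get_phase is any callable returning the pod phase string, e.g. a
    closure over a Kubernetes API client (hypothetical here); sleep is
    injectable so the loop can be exercised without real delays.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError(f"pod not in a terminal phase after {timeout}s")

# Replaying the phase transitions recorded in the log:
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
print(wait_for_pod_completion(lambda: next(phases), sleep=lambda s: None))  # Succeeded
```

The 5m0s in the log corresponds to `timeout`, and the roughly two-second gaps between "Elapsed" entries correspond to `interval`.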
• [SLOW TEST:8.739 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3327,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:48:46.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 21:48:47.353: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:48:48.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9680" for this suite.
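The CustomResourceDefinition defaulting test above exercises `default:` values declared in a v1 CRD's structural schema, which the apiserver applies both to incoming requests and to objects read back from storage. A hypothetical minimal CRD of that shape (the group, kind, and field names are made up for illustration):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # hypothetical resource
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  # applied on create/update requests and when the object
                  # is read from storage without the field set
                  default: 1
```

Creating a `Widget` with an empty `spec` would then come back from the API with `spec.replicas: 1`.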
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":203,"skipped":3329,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:48:48.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-5ab4677e-e9e5-4927-8263-a20f4217c596
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5ab4677e-e9e5-4927-8263-a20f4217c596
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:50:08.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9615" for this suite.
• [SLOW TEST:79.591 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3333,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:50:08.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
May 11 21:50:08.660: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix602239666/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:50:08.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3105" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":205,"skipped":3342,"failed":0} ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:50:08.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4041 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4041;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4041 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4041;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4041.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4041.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4041.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4041.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.dns-test-service.dns-4041.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4041.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4041.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4041.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4041.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4041.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4041.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.38.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.38.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.38.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.38.165_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4041 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4041;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4041 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4041;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4041.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4041.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4041.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4041.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4041.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4041.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4041.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4041.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4041.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4041.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4041.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4041.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 165.38.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.38.165_udp@PTR;check="$$(dig +tcp +noall +answer +search 165.38.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.38.165_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 11 21:50:19.107: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.110: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.112: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.115: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.118: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods 
dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.120: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.124: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.127: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.148: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.151: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.154: INFO: Unable to read jessie_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.157: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.160: INFO: Unable to read jessie_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the 
requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.163: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.166: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.169: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:19.185: INFO: Lookups using dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4041 wheezy_tcp@dns-test-service.dns-4041 wheezy_udp@dns-test-service.dns-4041.svc wheezy_tcp@dns-test-service.dns-4041.svc wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4041 jessie_tcp@dns-test-service.dns-4041 jessie_udp@dns-test-service.dns-4041.svc jessie_tcp@dns-test-service.dns-4041.svc jessie_udp@_http._tcp.dns-test-service.dns-4041.svc jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc] May 11 21:50:24.330: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.334: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not 
find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.338: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.340: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.343: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.345: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.347: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.350: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.365: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.368: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: 
the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.370: INFO: Unable to read jessie_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.372: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.375: INFO: Unable to read jessie_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.377: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.379: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.381: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:24.394: INFO: Lookups using dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4041 wheezy_tcp@dns-test-service.dns-4041 wheezy_udp@dns-test-service.dns-4041.svc wheezy_tcp@dns-test-service.dns-4041.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4041 jessie_tcp@dns-test-service.dns-4041 jessie_udp@dns-test-service.dns-4041.svc jessie_tcp@dns-test-service.dns-4041.svc jessie_udp@_http._tcp.dns-test-service.dns-4041.svc jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc] May 11 21:50:29.189: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.192: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.203: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.206: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.409: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.412: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.415: INFO: Unable to read jessie_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.417: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.420: INFO: Unable to read jessie_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.422: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.424: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.426: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:29.438: INFO: Lookups using dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4041 wheezy_tcp@dns-test-service.dns-4041 wheezy_udp@dns-test-service.dns-4041.svc wheezy_tcp@dns-test-service.dns-4041.svc wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4041 jessie_tcp@dns-test-service.dns-4041 jessie_udp@dns-test-service.dns-4041.svc jessie_tcp@dns-test-service.dns-4041.svc jessie_udp@_http._tcp.dns-test-service.dns-4041.svc jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc] May 11 21:50:34.190: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.193: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.195: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 
21:50:34.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.201: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.203: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.204: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.222: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.224: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.227: INFO: Unable to read jessie_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods 
dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.228: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.231: INFO: Unable to read jessie_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.232: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.234: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.236: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:34.250: INFO: Lookups using dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4041 wheezy_tcp@dns-test-service.dns-4041 wheezy_udp@dns-test-service.dns-4041.svc wheezy_tcp@dns-test-service.dns-4041.svc wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4041 jessie_tcp@dns-test-service.dns-4041 jessie_udp@dns-test-service.dns-4041.svc jessie_tcp@dns-test-service.dns-4041.svc 
jessie_udp@_http._tcp.dns-test-service.dns-4041.svc jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc] May 11 21:50:39.189: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.217: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.219: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.222: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.225: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.226: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.228: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod 
dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.240: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.242: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.244: INFO: Unable to read jessie_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.247: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.248: INFO: Unable to read jessie_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.250: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.252: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.255: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:39.271: INFO: Lookups using dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4041 wheezy_tcp@dns-test-service.dns-4041 wheezy_udp@dns-test-service.dns-4041.svc wheezy_tcp@dns-test-service.dns-4041.svc wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4041 jessie_tcp@dns-test-service.dns-4041 jessie_udp@dns-test-service.dns-4041.svc jessie_tcp@dns-test-service.dns-4041.svc jessie_udp@_http._tcp.dns-test-service.dns-4041.svc jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc] May 11 21:50:44.190: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.194: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.197: INFO: Unable to read wheezy_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.200: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.203: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.206: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.209: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.212: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.231: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.234: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.236: INFO: Unable to read jessie_udp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.239: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041 from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.242: 
INFO: Unable to read jessie_udp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.248: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.251: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc from pod dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977: the server could not find the requested resource (get pods dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977) May 11 21:50:44.271: INFO: Lookups using dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4041 wheezy_tcp@dns-test-service.dns-4041 wheezy_udp@dns-test-service.dns-4041.svc wheezy_tcp@dns-test-service.dns-4041.svc wheezy_udp@_http._tcp.dns-test-service.dns-4041.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4041.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4041 jessie_tcp@dns-test-service.dns-4041 jessie_udp@dns-test-service.dns-4041.svc jessie_tcp@dns-test-service.dns-4041.svc jessie_udp@_http._tcp.dns-test-service.dns-4041.svc jessie_tcp@_http._tcp.dns-test-service.dns-4041.svc] May 11 21:50:49.872: INFO: DNS probes using dns-4041/dns-test-70ba44af-25b9-441d-8d87-39c8d3f09977 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:50:52.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4041" for this suite.
• [SLOW TEST:43.823 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":206,"skipped":3342,"failed":0}
SS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:50:52.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-map-7573062e-3019-4d40-8b51-50c9827f1ff4
STEP: Creating a pod to test consume secrets
May 11 21:50:53.145: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259" in namespace "projected-6072" to be "success or failure"
May 11 21:50:53.172: INFO: Pod "pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259": Phase="Pending", Reason="", readiness=false. Elapsed: 27.246348ms
May 11 21:50:55.179: INFO: Pod "pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034423345s
May 11 21:50:57.484: INFO: Pod "pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259": Phase="Pending", Reason="", readiness=false. Elapsed: 4.338815973s
May 11 21:50:59.487: INFO: Pod "pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.342164276s
STEP: Saw pod success
May 11 21:50:59.487: INFO: Pod "pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259" satisfied condition "success or failure"
May 11 21:50:59.489: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259 container projected-secret-volume-test:
STEP: delete the pod
May 11 21:50:59.564: INFO: Waiting for pod pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259 to disappear
May 11 21:50:59.567: INFO: Pod pod-projected-secrets-95a0fb5f-bd86-4768-a59f-f7c59a02d259 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:50:59.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6072" for this suite.
• [SLOW TEST:7.000 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":207,"skipped":3344,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:50:59.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:51:07.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6311" for this suite.
• [SLOW TEST:8.168 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3392,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:51:07.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 11 21:51:14.815: INFO: Expected: &{DONE} to match Container's Termination Message:
DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:51:15.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6957" for this suite.
• [SLOW TEST:7.677 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3398,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:51:15.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-6ff890a9-0d82-4ecc-ba1f-546543f1b462
STEP: Creating a pod to test consume configMaps
May 11 21:51:15.875: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66" in namespace "projected-445" to be "success or failure"
May 11 21:51:15.915: INFO: Pod "pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66": Phase="Pending", Reason="", readiness=false. Elapsed: 40.158173ms
May 11 21:51:17.918: INFO: Pod "pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043363517s
May 11 21:51:19.922: INFO: Pod "pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047833218s
May 11 21:51:21.951: INFO: Pod "pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076354383s
STEP: Saw pod success
May 11 21:51:21.951: INFO: Pod "pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66" satisfied condition "success or failure"
May 11 21:51:21.954: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66 container projected-configmap-volume-test:
STEP: delete the pod
May 11 21:51:21.976: INFO: Waiting for pod pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66 to disappear
May 11 21:51:21.986: INFO: Pod pod-projected-configmaps-a23772ab-dd5f-4faa-b69e-816f80e87a66 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:51:21.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-445" for this suite.
• [SLOW TEST:6.573 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3404,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 21:51:21.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
May 11 21:51:26.260: INFO: Pod pod-hostip-dadcd506-2c9d-4501-b10c-1a7a31fe6bde has hostIP: 172.17.0.10
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 21:51:26.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3126" for this suite.
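The {"msg":"PASSED …"} lines scattered through this log are JSON progress records emitted at each spec boundary. They decode with a small struct whose tags match the record's fields (the struct and helper names here are our own sketch, not suite code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// progress mirrors the JSON progress records emitted between specs in this log.
type progress struct {
	Msg       string `json:"msg"`
	Total     int    `json:"total"`
	Completed int    `json:"completed"`
	Skipped   int    `json:"skipped"`
	Failed    int    `json:"failed"`
}

// parseProgress decodes one progress record line.
func parseProgress(line string) (progress, error) {
	var p progress
	err := json.Unmarshal([]byte(line), &p)
	return p, err
}

func main() {
	// Record copied from the log above.
	line := `{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3404,"failed":0}`
	p, err := parseProgress(line)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d/%d specs completed, %d failed\n", p.Completed, p.Total, p.Failed)
}
```

Filtering a run's output to just these lines gives a compact pass/fail summary of the 278 conformance specs without the per-test detail.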
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3432,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:51:26.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:51:43.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5099" for this suite. • [SLOW TEST:17.497 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":212,"skipped":3445,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:51:43.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 11 21:51:47.300: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 11 21:51:49.580: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830707, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830707, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830707, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830707, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:51:51.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830707, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830707, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830707, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830707, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:51:54.663: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 21:51:54.761: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:51:56.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2428" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:13.513 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":213,"skipped":3458,"failed":0} S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:51:57.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a 
public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934 May 11 21:51:57.420: INFO: Pod name my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934: Found 0 pods out of 1 May 11 21:52:02.425: INFO: Pod name my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934: Found 1 pods out of 1 May 11 21:52:02.425: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934" are running May 11 21:52:04.431: INFO: Pod "my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934-tqsz7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:51:57 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:51:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:51:57 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-11 21:51:57 +0000 UTC Reason: Message:}]) May 11 21:52:04.431: INFO: Trying to dial the pod May 11 21:52:09.443: INFO: Controller my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934: Got expected result from replica 1 [my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934-tqsz7]: "my-hostname-basic-68d75425-a735-472f-b89f-4678b41c1934-tqsz7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:52:09.443: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8294" for this suite. • [SLOW TEST:12.175 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":214,"skipped":3459,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:52:09.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-2f1e2707-9a2b-456e-9efb-846e82d064ab STEP: Creating a pod to test consume configMaps May 11 21:52:10.257: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055" in namespace "configmap-750" to be "success or failure" May 11 21:52:10.287: INFO: Pod "pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055": Phase="Pending", Reason="", readiness=false. 
Elapsed: 29.800626ms May 11 21:52:13.374: INFO: Pod "pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055": Phase="Pending", Reason="", readiness=false. Elapsed: 3.117166036s May 11 21:52:15.486: INFO: Pod "pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055": Phase="Pending", Reason="", readiness=false. Elapsed: 5.228749864s May 11 21:52:17.489: INFO: Pod "pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.232189768s STEP: Saw pod success May 11 21:52:17.489: INFO: Pod "pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055" satisfied condition "success or failure" May 11 21:52:17.491: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055 container configmap-volume-test: STEP: delete the pod May 11 21:52:17.539: INFO: Waiting for pod pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055 to disappear May 11 21:52:17.701: INFO: Pod pod-configmaps-6c1bfef0-9a24-42ec-bc13-7f6643d3a055 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:52:17.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-750" for this suite. 
• [SLOW TEST:8.257 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3466,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:52:17.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-3797 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3797 to expose endpoints map[] May 11 21:52:18.313: INFO: Get endpoints failed (18.637348ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 11 21:52:19.316: INFO: successfully validated that service multi-endpoint-test in namespace services-3797 exposes endpoints map[] (1.022070804s elapsed) STEP: Creating pod pod1 in namespace services-3797 STEP: waiting up to 3m0s for service multi-endpoint-test 
in namespace services-3797 to expose endpoints map[pod1:[100]] May 11 21:52:24.242: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.918559794s elapsed, will retry) May 11 21:52:26.369: INFO: successfully validated that service multi-endpoint-test in namespace services-3797 exposes endpoints map[pod1:[100]] (7.045905232s elapsed) STEP: Creating pod pod2 in namespace services-3797 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3797 to expose endpoints map[pod1:[100] pod2:[101]] May 11 21:52:31.146: INFO: Unexpected endpoints: found map[93888877-f8e0-4214-86d1-4769646e59bc:[100]], expected map[pod1:[100] pod2:[101]] (4.772972318s elapsed, will retry) May 11 21:52:32.263: INFO: successfully validated that service multi-endpoint-test in namespace services-3797 exposes endpoints map[pod1:[100] pod2:[101]] (5.889633352s elapsed) STEP: Deleting pod pod1 in namespace services-3797 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3797 to expose endpoints map[pod2:[101]] May 11 21:52:33.422: INFO: successfully validated that service multi-endpoint-test in namespace services-3797 exposes endpoints map[pod2:[101]] (1.155276323s elapsed) STEP: Deleting pod pod2 in namespace services-3797 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3797 to expose endpoints map[] May 11 21:52:34.484: INFO: successfully validated that service multi-endpoint-test in namespace services-3797 exposes endpoints map[] (1.057580179s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:52:34.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3797" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:17.274 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":216,"skipped":3473,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:52:34.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 21:52:35.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4555' May 11 
21:52:46.258: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 21:52:46.258: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 11 21:52:46.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4555' May 11 21:52:46.932: INFO: stderr: "" May 11 21:52:46.932: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:52:46.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4555" for this suite. • [SLOW TEST:11.959 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1677 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":217,"skipped":3521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:52:46.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8246 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 21:52:47.356: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 21:53:15.905: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.242 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8246 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:53:15.905: INFO: >>> kubeConfig: /root/.kube/config I0511 21:53:15.936519 6 log.go:172] (0xc000f8c0b0) (0xc0024800a0) Create stream I0511 21:53:15.936564 6 log.go:172] (0xc000f8c0b0) (0xc0024800a0) Stream added, broadcasting: 1 I0511 21:53:15.940326 6 log.go:172] (0xc000f8c0b0) Reply frame received for 1 I0511 21:53:15.940361 6 log.go:172] (0xc000f8c0b0) (0xc002480140) Create stream I0511 21:53:15.940370 6 log.go:172] (0xc000f8c0b0) (0xc002480140) Stream added, broadcasting: 3 I0511 21:53:15.941458 6 log.go:172] (0xc000f8c0b0) Reply frame received for 3 I0511 21:53:15.941497 6 log.go:172] (0xc000f8c0b0) (0xc0029f9400) Create stream I0511 21:53:15.941507 6 log.go:172] (0xc000f8c0b0) (0xc0029f9400) Stream added, broadcasting: 5 I0511 21:53:15.942865 6 log.go:172] (0xc000f8c0b0) Reply frame received for 5 I0511 21:53:17.030758 6 log.go:172] (0xc000f8c0b0) Data frame received for 3 I0511 21:53:17.030793 
6 log.go:172] (0xc002480140) (3) Data frame handling I0511 21:53:17.030807 6 log.go:172] (0xc002480140) (3) Data frame sent I0511 21:53:17.030820 6 log.go:172] (0xc000f8c0b0) Data frame received for 3 I0511 21:53:17.030838 6 log.go:172] (0xc002480140) (3) Data frame handling I0511 21:53:17.031582 6 log.go:172] (0xc000f8c0b0) Data frame received for 5 I0511 21:53:17.031606 6 log.go:172] (0xc0029f9400) (5) Data frame handling I0511 21:53:17.033050 6 log.go:172] (0xc000f8c0b0) Data frame received for 1 I0511 21:53:17.033107 6 log.go:172] (0xc0024800a0) (1) Data frame handling I0511 21:53:17.033438 6 log.go:172] (0xc0024800a0) (1) Data frame sent I0511 21:53:17.033482 6 log.go:172] (0xc000f8c0b0) (0xc0024800a0) Stream removed, broadcasting: 1 I0511 21:53:17.033541 6 log.go:172] (0xc000f8c0b0) Go away received I0511 21:53:17.033605 6 log.go:172] (0xc000f8c0b0) (0xc0024800a0) Stream removed, broadcasting: 1 I0511 21:53:17.033631 6 log.go:172] (0xc000f8c0b0) (0xc002480140) Stream removed, broadcasting: 3 I0511 21:53:17.033646 6 log.go:172] (0xc000f8c0b0) (0xc0029f9400) Stream removed, broadcasting: 5 May 11 21:53:17.033: INFO: Found all expected endpoints: [netserver-0] May 11 21:53:17.036: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.175 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8246 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 11 21:53:17.036: INFO: >>> kubeConfig: /root/.kube/config I0511 21:53:17.061074 6 log.go:172] (0xc00173a4d0) (0xc0024680a0) Create stream I0511 21:53:17.061096 6 log.go:172] (0xc00173a4d0) (0xc0024680a0) Stream added, broadcasting: 1 I0511 21:53:17.062839 6 log.go:172] (0xc00173a4d0) Reply frame received for 1 I0511 21:53:17.062870 6 log.go:172] (0xc00173a4d0) (0xc0024801e0) Create stream I0511 21:53:17.062884 6 log.go:172] (0xc00173a4d0) (0xc0024801e0) Stream added, broadcasting: 3 I0511 21:53:17.063666 6 log.go:172] 
(0xc00173a4d0) Reply frame received for 3 I0511 21:53:17.063691 6 log.go:172] (0xc00173a4d0) (0xc00197abe0) Create stream I0511 21:53:17.063710 6 log.go:172] (0xc00173a4d0) (0xc00197abe0) Stream added, broadcasting: 5 I0511 21:53:17.064435 6 log.go:172] (0xc00173a4d0) Reply frame received for 5 I0511 21:53:18.152424 6 log.go:172] (0xc00173a4d0) Data frame received for 3 I0511 21:53:18.152446 6 log.go:172] (0xc0024801e0) (3) Data frame handling I0511 21:53:18.152466 6 log.go:172] (0xc0024801e0) (3) Data frame sent I0511 21:53:18.152504 6 log.go:172] (0xc00173a4d0) Data frame received for 3 I0511 21:53:18.152525 6 log.go:172] (0xc0024801e0) (3) Data frame handling I0511 21:53:18.152673 6 log.go:172] (0xc00173a4d0) Data frame received for 5 I0511 21:53:18.152687 6 log.go:172] (0xc00197abe0) (5) Data frame handling I0511 21:53:18.155523 6 log.go:172] (0xc00173a4d0) Data frame received for 1 I0511 21:53:18.155545 6 log.go:172] (0xc0024680a0) (1) Data frame handling I0511 21:53:18.155557 6 log.go:172] (0xc0024680a0) (1) Data frame sent I0511 21:53:18.155573 6 log.go:172] (0xc00173a4d0) (0xc0024680a0) Stream removed, broadcasting: 1 I0511 21:53:18.155592 6 log.go:172] (0xc00173a4d0) Go away received I0511 21:53:18.155683 6 log.go:172] (0xc00173a4d0) (0xc0024680a0) Stream removed, broadcasting: 1 I0511 21:53:18.155695 6 log.go:172] (0xc00173a4d0) (0xc0024801e0) Stream removed, broadcasting: 3 I0511 21:53:18.155703 6 log.go:172] (0xc00173a4d0) (0xc00197abe0) Stream removed, broadcasting: 5 May 11 21:53:18.155: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:53:18.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8246" for this suite. 
• [SLOW TEST:31.219 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3547,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:53:18.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 21:53:20.750: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 21:53:22.759: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830801, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:53:24.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830801, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:53:27.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830801, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 21:53:28.762: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830801, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830800, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 21:53:31.810: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap 
creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:53:41.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3393" for this suite. STEP: Destroying namespace "webhook-3393-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:23.881 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":219,"skipped":3586,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:53:42.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 11 21:53:42.139: INFO: PodSpec: initContainers in spec.initContainers May 11 21:54:34.571: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"pod-init-1bee5320-a4b3-4668-9d3e-73dd6cfe3785", GenerateName:"", Namespace:"init-container-8051", SelfLink:"/api/v1/namespaces/init-container-8051/pods/pod-init-1bee5320-a4b3-4668-9d3e-73dd6cfe3785", UID:"33aa1ef9-e05d-42ba-962c-803f3291b875", ResourceVersion:"15367003", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724830822, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"139056113"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8dnvf", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc005f9c040), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), 
StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8dnvf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8dnvf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", 
Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8dnvf", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0037d2068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002d5a000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037d20f0)}, 
v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037d2120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0037d2128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0037d212c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830822, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830822, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830822, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724830822, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.243", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.243"}}, StartTime:(*v1.Time)(0xc0037b6080), 
InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00089e2a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00089e310)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://e9e7225adcc3e99cf095d4c49b9a19dd7dfa435399b9fec25eb549456c15006d", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037b60c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0037b60a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0037d21af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 21:54:34.571: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8051" for this suite. • [SLOW TEST:52.547 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":220,"skipped":3599,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 21:54:34.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7643 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace 
statefulset-7643 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7643 May 11 21:54:34.924: INFO: Found 0 stateful pods, waiting for 1 May 11 21:54:44.928: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 11 21:54:44.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 21:54:45.225: INFO: stderr: "I0511 21:54:45.049045 2836 log.go:172] (0xc0009426e0) (0xc00098e000) Create stream\nI0511 21:54:45.049087 2836 log.go:172] (0xc0009426e0) (0xc00098e000) Stream added, broadcasting: 1\nI0511 21:54:45.052333 2836 log.go:172] (0xc0009426e0) Reply frame received for 1\nI0511 21:54:45.052371 2836 log.go:172] (0xc0009426e0) (0xc00066dae0) Create stream\nI0511 21:54:45.052382 2836 log.go:172] (0xc0009426e0) (0xc00066dae0) Stream added, broadcasting: 3\nI0511 21:54:45.053449 2836 log.go:172] (0xc0009426e0) Reply frame received for 3\nI0511 21:54:45.053475 2836 log.go:172] (0xc0009426e0) (0xc0002b0000) Create stream\nI0511 21:54:45.053485 2836 log.go:172] (0xc0009426e0) (0xc0002b0000) Stream added, broadcasting: 5\nI0511 21:54:45.054262 2836 log.go:172] (0xc0009426e0) Reply frame received for 5\nI0511 21:54:45.119269 2836 log.go:172] (0xc0009426e0) Data frame received for 5\nI0511 21:54:45.119289 2836 log.go:172] (0xc0002b0000) (5) Data frame handling\nI0511 21:54:45.119299 2836 log.go:172] (0xc0002b0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:54:45.218723 2836 log.go:172] (0xc0009426e0) Data frame received for 3\nI0511 21:54:45.218749 2836 log.go:172] (0xc00066dae0) (3) Data frame handling\nI0511 21:54:45.218761 2836 log.go:172] (0xc00066dae0) (3) Data frame sent\nI0511 21:54:45.218767 2836 log.go:172] (0xc0009426e0) Data frame 
received for 3\nI0511 21:54:45.218771 2836 log.go:172] (0xc00066dae0) (3) Data frame handling\nI0511 21:54:45.218831 2836 log.go:172] (0xc0009426e0) Data frame received for 5\nI0511 21:54:45.218848 2836 log.go:172] (0xc0002b0000) (5) Data frame handling\nI0511 21:54:45.220961 2836 log.go:172] (0xc0009426e0) Data frame received for 1\nI0511 21:54:45.220989 2836 log.go:172] (0xc00098e000) (1) Data frame handling\nI0511 21:54:45.221001 2836 log.go:172] (0xc00098e000) (1) Data frame sent\nI0511 21:54:45.221010 2836 log.go:172] (0xc0009426e0) (0xc00098e000) Stream removed, broadcasting: 1\nI0511 21:54:45.221393 2836 log.go:172] (0xc0009426e0) (0xc00098e000) Stream removed, broadcasting: 1\nI0511 21:54:45.221411 2836 log.go:172] (0xc0009426e0) (0xc00066dae0) Stream removed, broadcasting: 3\nI0511 21:54:45.221420 2836 log.go:172] (0xc0009426e0) (0xc0002b0000) Stream removed, broadcasting: 5\n" May 11 21:54:45.225: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 21:54:45.225: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 21:54:45.229: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 11 21:54:55.244: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 21:54:55.244: INFO: Waiting for statefulset status.replicas updated to 0 May 11 21:54:55.404: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:54:55.404: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-05-11 21:54:34 +0000 UTC }] May 11 21:54:55.405: INFO: May 11 21:54:55.405: INFO: StatefulSet ss has not reached scale 3, at 1 May 11 21:54:56.408: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.865981972s May 11 21:54:57.709: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.862608385s May 11 21:54:58.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.561066633s May 11 21:54:59.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.557500236s May 11 21:55:00.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.447120294s May 11 21:55:01.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.285490631s May 11 21:55:02.997: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.278900506s May 11 21:55:04.022: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.273509052s May 11 21:55:05.025: INFO: Verifying statefulset ss doesn't scale past 3 for another 248.64038ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7643 May 11 21:55:06.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:55:06.633: INFO: stderr: "I0511 21:55:06.544479 2858 log.go:172] (0xc000afebb0) (0xc0005f9f40) Create stream\nI0511 21:55:06.544516 2858 log.go:172] (0xc000afebb0) (0xc0005f9f40) Stream added, broadcasting: 1\nI0511 21:55:06.546401 2858 log.go:172] (0xc000afebb0) Reply frame received for 1\nI0511 21:55:06.546447 2858 log.go:172] (0xc000afebb0) (0xc000ac80a0) Create stream\nI0511 21:55:06.546471 2858 log.go:172] (0xc000afebb0) (0xc000ac80a0) Stream added, broadcasting: 3\nI0511 21:55:06.547249 2858 log.go:172] (0xc000afebb0) Reply frame received for 3\nI0511 21:55:06.547275 2858 log.go:172] (0xc000afebb0) (0xc000ac8140) Create stream\nI0511 
21:55:06.547287 2858 log.go:172] (0xc000afebb0) (0xc000ac8140) Stream added, broadcasting: 5\nI0511 21:55:06.548114 2858 log.go:172] (0xc000afebb0) Reply frame received for 5\nI0511 21:55:06.627818 2858 log.go:172] (0xc000afebb0) Data frame received for 3\nI0511 21:55:06.629551 2858 log.go:172] (0xc000ac80a0) (3) Data frame handling\nI0511 21:55:06.629621 2858 log.go:172] (0xc000ac80a0) (3) Data frame sent\nI0511 21:55:06.629637 2858 log.go:172] (0xc000afebb0) Data frame received for 3\nI0511 21:55:06.629646 2858 log.go:172] (0xc000ac80a0) (3) Data frame handling\nI0511 21:55:06.629686 2858 log.go:172] (0xc000afebb0) Data frame received for 5\nI0511 21:55:06.629718 2858 log.go:172] (0xc000ac8140) (5) Data frame handling\nI0511 21:55:06.629732 2858 log.go:172] (0xc000ac8140) (5) Data frame sent\nI0511 21:55:06.629744 2858 log.go:172] (0xc000afebb0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 21:55:06.629754 2858 log.go:172] (0xc000ac8140) (5) Data frame handling\nI0511 21:55:06.629860 2858 log.go:172] (0xc000afebb0) Data frame received for 1\nI0511 21:55:06.629886 2858 log.go:172] (0xc0005f9f40) (1) Data frame handling\nI0511 21:55:06.629902 2858 log.go:172] (0xc0005f9f40) (1) Data frame sent\nI0511 21:55:06.629928 2858 log.go:172] (0xc000afebb0) (0xc0005f9f40) Stream removed, broadcasting: 1\nI0511 21:55:06.629958 2858 log.go:172] (0xc000afebb0) Go away received\nI0511 21:55:06.630774 2858 log.go:172] (0xc000afebb0) (0xc0005f9f40) Stream removed, broadcasting: 1\nI0511 21:55:06.630787 2858 log.go:172] (0xc000afebb0) (0xc000ac80a0) Stream removed, broadcasting: 3\nI0511 21:55:06.630801 2858 log.go:172] (0xc000afebb0) (0xc000ac8140) Stream removed, broadcasting: 5\n" May 11 21:55:06.633: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 21:55:06.633: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' 
May 11 21:55:06.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:55:06.818: INFO: stderr: "I0511 21:55:06.757895 2878 log.go:172] (0xc000a149a0) (0xc00095e3c0) Create stream\nI0511 21:55:06.757944 2878 log.go:172] (0xc000a149a0) (0xc00095e3c0) Stream added, broadcasting: 1\nI0511 21:55:06.761495 2878 log.go:172] (0xc000a149a0) Reply frame received for 1\nI0511 21:55:06.761527 2878 log.go:172] (0xc000a149a0) (0xc0005dc780) Create stream\nI0511 21:55:06.761537 2878 log.go:172] (0xc000a149a0) (0xc0005dc780) Stream added, broadcasting: 3\nI0511 21:55:06.762350 2878 log.go:172] (0xc000a149a0) Reply frame received for 3\nI0511 21:55:06.762384 2878 log.go:172] (0xc000a149a0) (0xc000711540) Create stream\nI0511 21:55:06.762397 2878 log.go:172] (0xc000a149a0) (0xc000711540) Stream added, broadcasting: 5\nI0511 21:55:06.763471 2878 log.go:172] (0xc000a149a0) Reply frame received for 5\nI0511 21:55:06.814111 2878 log.go:172] (0xc000a149a0) Data frame received for 5\nI0511 21:55:06.814159 2878 log.go:172] (0xc000711540) (5) Data frame handling\nI0511 21:55:06.814186 2878 log.go:172] (0xc000711540) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 21:55:06.814218 2878 log.go:172] (0xc000a149a0) Data frame received for 3\nI0511 21:55:06.814238 2878 log.go:172] (0xc0005dc780) (3) Data frame handling\nI0511 21:55:06.814258 2878 log.go:172] (0xc000a149a0) Data frame received for 5\nI0511 21:55:06.814275 2878 log.go:172] (0xc000711540) (5) Data frame handling\nI0511 21:55:06.814305 2878 log.go:172] (0xc0005dc780) (3) Data frame sent\nI0511 21:55:06.814319 2878 log.go:172] (0xc000a149a0) Data frame received for 3\nI0511 21:55:06.814332 2878 log.go:172] (0xc0005dc780) (3) Data frame handling\nI0511 21:55:06.815443 2878 log.go:172] 
(0xc000a149a0) Data frame received for 1\nI0511 21:55:06.815479 2878 log.go:172] (0xc00095e3c0) (1) Data frame handling\nI0511 21:55:06.815495 2878 log.go:172] (0xc00095e3c0) (1) Data frame sent\nI0511 21:55:06.815570 2878 log.go:172] (0xc000a149a0) (0xc00095e3c0) Stream removed, broadcasting: 1\nI0511 21:55:06.815613 2878 log.go:172] (0xc000a149a0) Go away received\nI0511 21:55:06.815879 2878 log.go:172] (0xc000a149a0) (0xc00095e3c0) Stream removed, broadcasting: 1\nI0511 21:55:06.815891 2878 log.go:172] (0xc000a149a0) (0xc0005dc780) Stream removed, broadcasting: 3\nI0511 21:55:06.815897 2878 log.go:172] (0xc000a149a0) (0xc000711540) Stream removed, broadcasting: 5\n" May 11 21:55:06.818: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 21:55:06.818: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 21:55:06.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:55:06.992: INFO: stderr: "I0511 21:55:06.939617 2898 log.go:172] (0xc000a76000) (0xc0005b86e0) Create stream\nI0511 21:55:06.939671 2898 log.go:172] (0xc000a76000) (0xc0005b86e0) Stream added, broadcasting: 1\nI0511 21:55:06.941908 2898 log.go:172] (0xc000a76000) Reply frame received for 1\nI0511 21:55:06.941936 2898 log.go:172] (0xc000a76000) (0xc0007634a0) Create stream\nI0511 21:55:06.941946 2898 log.go:172] (0xc000a76000) (0xc0007634a0) Stream added, broadcasting: 3\nI0511 21:55:06.942614 2898 log.go:172] (0xc000a76000) Reply frame received for 3\nI0511 21:55:06.942642 2898 log.go:172] (0xc000a76000) (0xc000a9e140) Create stream\nI0511 21:55:06.942648 2898 log.go:172] (0xc000a76000) (0xc000a9e140) Stream added, broadcasting: 5\nI0511 21:55:06.943998 2898 log.go:172] (0xc000a76000) Reply frame received for 5\nI0511 
21:55:06.986474 2898 log.go:172] (0xc000a76000) Data frame received for 5\nI0511 21:55:06.986501 2898 log.go:172] (0xc000a76000) Data frame received for 3\nI0511 21:55:06.986527 2898 log.go:172] (0xc0007634a0) (3) Data frame handling\nI0511 21:55:06.986545 2898 log.go:172] (0xc0007634a0) (3) Data frame sent\nI0511 21:55:06.986562 2898 log.go:172] (0xc000a76000) Data frame received for 3\nI0511 21:55:06.986570 2898 log.go:172] (0xc0007634a0) (3) Data frame handling\nI0511 21:55:06.986598 2898 log.go:172] (0xc000a9e140) (5) Data frame handling\nI0511 21:55:06.986608 2898 log.go:172] (0xc000a9e140) (5) Data frame sent\nI0511 21:55:06.986615 2898 log.go:172] (0xc000a76000) Data frame received for 5\nI0511 21:55:06.986622 2898 log.go:172] (0xc000a9e140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0511 21:55:06.987590 2898 log.go:172] (0xc000a76000) Data frame received for 1\nI0511 21:55:06.987605 2898 log.go:172] (0xc0005b86e0) (1) Data frame handling\nI0511 21:55:06.987616 2898 log.go:172] (0xc0005b86e0) (1) Data frame sent\nI0511 21:55:06.987628 2898 log.go:172] (0xc000a76000) (0xc0005b86e0) Stream removed, broadcasting: 1\nI0511 21:55:06.987903 2898 log.go:172] (0xc000a76000) (0xc0005b86e0) Stream removed, broadcasting: 1\nI0511 21:55:06.987918 2898 log.go:172] (0xc000a76000) (0xc0007634a0) Stream removed, broadcasting: 3\nI0511 21:55:06.987928 2898 log.go:172] (0xc000a76000) (0xc000a9e140) Stream removed, broadcasting: 5\n" May 11 21:55:06.992: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 21:55:06.992: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 21:55:06.997: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 11 21:55:06.997: INFO: Waiting for pod ss-1 to enter Running - 
Ready=true, currently Running - Ready=true May 11 21:55:06.997: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 11 21:55:06.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 21:55:07.190: INFO: stderr: "I0511 21:55:07.121567 2918 log.go:172] (0xc0009a2000) (0xc0006166e0) Create stream\nI0511 21:55:07.121611 2918 log.go:172] (0xc0009a2000) (0xc0006166e0) Stream added, broadcasting: 1\nI0511 21:55:07.123797 2918 log.go:172] (0xc0009a2000) Reply frame received for 1\nI0511 21:55:07.123830 2918 log.go:172] (0xc0009a2000) (0xc0004d14a0) Create stream\nI0511 21:55:07.123841 2918 log.go:172] (0xc0009a2000) (0xc0004d14a0) Stream added, broadcasting: 3\nI0511 21:55:07.124535 2918 log.go:172] (0xc0009a2000) Reply frame received for 3\nI0511 21:55:07.124557 2918 log.go:172] (0xc0009a2000) (0xc000976000) Create stream\nI0511 21:55:07.124565 2918 log.go:172] (0xc0009a2000) (0xc000976000) Stream added, broadcasting: 5\nI0511 21:55:07.130885 2918 log.go:172] (0xc0009a2000) Reply frame received for 5\nI0511 21:55:07.184604 2918 log.go:172] (0xc0009a2000) Data frame received for 3\nI0511 21:55:07.184635 2918 log.go:172] (0xc0004d14a0) (3) Data frame handling\nI0511 21:55:07.184647 2918 log.go:172] (0xc0004d14a0) (3) Data frame sent\nI0511 21:55:07.184674 2918 log.go:172] (0xc0009a2000) Data frame received for 5\nI0511 21:55:07.184698 2918 log.go:172] (0xc000976000) (5) Data frame handling\nI0511 21:55:07.184715 2918 log.go:172] (0xc000976000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:55:07.184732 2918 log.go:172] (0xc0009a2000) Data frame received for 5\nI0511 21:55:07.184748 2918 log.go:172] (0xc000976000) (5) Data frame handling\nI0511 21:55:07.184770 2918 log.go:172] (0xc0009a2000) Data frame 
received for 3\nI0511 21:55:07.184795 2918 log.go:172] (0xc0004d14a0) (3) Data frame handling\nI0511 21:55:07.185981 2918 log.go:172] (0xc0009a2000) Data frame received for 1\nI0511 21:55:07.186021 2918 log.go:172] (0xc0006166e0) (1) Data frame handling\nI0511 21:55:07.186035 2918 log.go:172] (0xc0006166e0) (1) Data frame sent\nI0511 21:55:07.186049 2918 log.go:172] (0xc0009a2000) (0xc0006166e0) Stream removed, broadcasting: 1\nI0511 21:55:07.186067 2918 log.go:172] (0xc0009a2000) Go away received\nI0511 21:55:07.186324 2918 log.go:172] (0xc0009a2000) (0xc0006166e0) Stream removed, broadcasting: 1\nI0511 21:55:07.186341 2918 log.go:172] (0xc0009a2000) (0xc0004d14a0) Stream removed, broadcasting: 3\nI0511 21:55:07.186353 2918 log.go:172] (0xc0009a2000) (0xc000976000) Stream removed, broadcasting: 5\n" May 11 21:55:07.190: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 21:55:07.190: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 21:55:07.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 21:55:07.458: INFO: stderr: "I0511 21:55:07.298691 2938 log.go:172] (0xc0007ceb00) (0xc000661ea0) Create stream\nI0511 21:55:07.298751 2938 log.go:172] (0xc0007ceb00) (0xc000661ea0) Stream added, broadcasting: 1\nI0511 21:55:07.300670 2938 log.go:172] (0xc0007ceb00) Reply frame received for 1\nI0511 21:55:07.300703 2938 log.go:172] (0xc0007ceb00) (0xc000470780) Create stream\nI0511 21:55:07.300715 2938 log.go:172] (0xc0007ceb00) (0xc000470780) Stream added, broadcasting: 3\nI0511 21:55:07.301662 2938 log.go:172] (0xc0007ceb00) Reply frame received for 3\nI0511 21:55:07.301699 2938 log.go:172] (0xc0007ceb00) (0xc000661f40) Create stream\nI0511 21:55:07.301715 2938 log.go:172] (0xc0007ceb00) (0xc000661f40) 
Stream added, broadcasting: 5\nI0511 21:55:07.302430 2938 log.go:172] (0xc0007ceb00) Reply frame received for 5\nI0511 21:55:07.402890 2938 log.go:172] (0xc0007ceb00) Data frame received for 5\nI0511 21:55:07.402915 2938 log.go:172] (0xc000661f40) (5) Data frame handling\nI0511 21:55:07.402935 2938 log.go:172] (0xc000661f40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:55:07.454909 2938 log.go:172] (0xc0007ceb00) Data frame received for 3\nI0511 21:55:07.454927 2938 log.go:172] (0xc000470780) (3) Data frame handling\nI0511 21:55:07.454937 2938 log.go:172] (0xc000470780) (3) Data frame sent\nI0511 21:55:07.455274 2938 log.go:172] (0xc0007ceb00) Data frame received for 5\nI0511 21:55:07.455283 2938 log.go:172] (0xc000661f40) (5) Data frame handling\nI0511 21:55:07.455320 2938 log.go:172] (0xc0007ceb00) Data frame received for 3\nI0511 21:55:07.455350 2938 log.go:172] (0xc000470780) (3) Data frame handling\nI0511 21:55:07.456257 2938 log.go:172] (0xc0007ceb00) Data frame received for 1\nI0511 21:55:07.456266 2938 log.go:172] (0xc000661ea0) (1) Data frame handling\nI0511 21:55:07.456282 2938 log.go:172] (0xc000661ea0) (1) Data frame sent\nI0511 21:55:07.456300 2938 log.go:172] (0xc0007ceb00) (0xc000661ea0) Stream removed, broadcasting: 1\nI0511 21:55:07.456488 2938 log.go:172] (0xc0007ceb00) Go away received\nI0511 21:55:07.456501 2938 log.go:172] (0xc0007ceb00) (0xc000661ea0) Stream removed, broadcasting: 1\nI0511 21:55:07.456512 2938 log.go:172] (0xc0007ceb00) (0xc000470780) Stream removed, broadcasting: 3\nI0511 21:55:07.456520 2938 log.go:172] (0xc0007ceb00) (0xc000661f40) Stream removed, broadcasting: 5\n" May 11 21:55:07.459: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 21:55:07.459: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 21:55:07.459: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 21:55:07.743: INFO: stderr: "I0511 21:55:07.626550 2958 log.go:172] (0xc0000ec370) (0xc0006afcc0) Create stream\nI0511 21:55:07.626606 2958 log.go:172] (0xc0000ec370) (0xc0006afcc0) Stream added, broadcasting: 1\nI0511 21:55:07.628764 2958 log.go:172] (0xc0000ec370) Reply frame received for 1\nI0511 21:55:07.628806 2958 log.go:172] (0xc0000ec370) (0xc0008f6000) Create stream\nI0511 21:55:07.628820 2958 log.go:172] (0xc0000ec370) (0xc0008f6000) Stream added, broadcasting: 3\nI0511 21:55:07.629792 2958 log.go:172] (0xc0000ec370) Reply frame received for 3\nI0511 21:55:07.629830 2958 log.go:172] (0xc0000ec370) (0xc0006afd60) Create stream\nI0511 21:55:07.629845 2958 log.go:172] (0xc0000ec370) (0xc0006afd60) Stream added, broadcasting: 5\nI0511 21:55:07.630695 2958 log.go:172] (0xc0000ec370) Reply frame received for 5\nI0511 21:55:07.687687 2958 log.go:172] (0xc0000ec370) Data frame received for 5\nI0511 21:55:07.687715 2958 log.go:172] (0xc0006afd60) (5) Data frame handling\nI0511 21:55:07.687738 2958 log.go:172] (0xc0006afd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 21:55:07.737745 2958 log.go:172] (0xc0000ec370) Data frame received for 5\nI0511 21:55:07.737772 2958 log.go:172] (0xc0006afd60) (5) Data frame handling\nI0511 21:55:07.737788 2958 log.go:172] (0xc0000ec370) Data frame received for 3\nI0511 21:55:07.737795 2958 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0511 21:55:07.737803 2958 log.go:172] (0xc0008f6000) (3) Data frame sent\nI0511 21:55:07.737809 2958 log.go:172] (0xc0000ec370) Data frame received for 3\nI0511 21:55:07.737814 2958 log.go:172] (0xc0008f6000) (3) Data frame handling\nI0511 21:55:07.739201 2958 log.go:172] (0xc0000ec370) Data frame received for 1\nI0511 21:55:07.739222 2958 log.go:172] (0xc0006afcc0) (1) Data frame handling\nI0511 
21:55:07.739240 2958 log.go:172] (0xc0006afcc0) (1) Data frame sent\nI0511 21:55:07.739252 2958 log.go:172] (0xc0000ec370) (0xc0006afcc0) Stream removed, broadcasting: 1\nI0511 21:55:07.739266 2958 log.go:172] (0xc0000ec370) Go away received\nI0511 21:55:07.739582 2958 log.go:172] (0xc0000ec370) (0xc0006afcc0) Stream removed, broadcasting: 1\nI0511 21:55:07.739601 2958 log.go:172] (0xc0000ec370) (0xc0008f6000) Stream removed, broadcasting: 3\nI0511 21:55:07.739610 2958 log.go:172] (0xc0000ec370) (0xc0006afd60) Stream removed, broadcasting: 5\n" May 11 21:55:07.743: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 21:55:07.743: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 21:55:07.743: INFO: Waiting for statefulset status.replicas updated to 0 May 11 21:55:07.764: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 11 21:55:17.984: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 11 21:55:17.984: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 11 21:55:17.984: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 11 21:55:18.734: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:18.734: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:18.734: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 
21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:18.734: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:18.734: INFO: May 11 21:55:18.734: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 21:55:19.958: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:19.958: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:19.958: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:19.958: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:19.958: INFO: May 11 21:55:19.958: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 21:55:20.963: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:20.963: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:20.963: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:20.963: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:20.963: INFO: May 11 21:55:20.963: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 21:55:22.144: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:22.144: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:22.144: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:22.144: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:22.144: INFO: May 11 21:55:22.144: INFO: StatefulSet ss has not reached 
scale 0, at 3 May 11 21:55:23.149: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:23.149: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:23.149: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:23.149: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:23.149: INFO: May 11 21:55:23.149: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 21:55:24.154: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:24.154: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:24.154: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:24.154: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:24.154: INFO: May 11 21:55:24.154: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 21:55:25.198: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:25.198: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:25.198: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:25.198: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:25.198: INFO: May 11 21:55:25.198: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 21:55:26.202: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:26.202: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:26.202: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:26.202: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:26.202: INFO: May 11 21:55:26.202: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 21:55:27.206: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:27.206: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:27.206: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:27.206: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:27.206: INFO: May 11 21:55:27.206: INFO: StatefulSet ss has not reached scale 0, at 3 May 11 21:55:28.209: INFO: POD NODE PHASE GRACE CONDITIONS May 11 21:55:28.209: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:34 +0000 UTC }] May 11 21:55:28.209: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:28.210: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:55:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-11 21:54:55 +0000 UTC }] May 11 21:55:28.210: INFO: May 11 
21:55:28.210: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7643 May 11 21:55:29.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:55:29.621: INFO: rc: 1 May 11 21:55:29.621: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 11 21:55:39.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:55:39.719: INFO: rc: 1 May 11 21:55:39.719: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:55:49.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:55:49.824: INFO: rc: 1 May 11 21:55:49.824: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:55:59.824: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:55:59.969: INFO: rc: 1 May 11 21:55:59.969: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:56:09.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:56:10.064: INFO: rc: 1 May 11 21:56:10.064: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:56:20.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:56:20.175: INFO: rc: 1 May 11 21:56:20.175: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:56:30.175: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:56:30.274: INFO: rc: 1 May 11 21:56:30.274: INFO: Waiting 10s to 
retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:56:40.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:56:40.521: INFO: rc: 1 May 11 21:56:40.521: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:56:50.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:56:50.646: INFO: rc: 1 May 11 21:56:50.646: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:57:00.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:57:00.738: INFO: rc: 1 May 11 21:57:00.738: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:57:10.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:57:10.860: INFO: rc: 1 May 11 21:57:10.860: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:57:20.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:57:20.979: INFO: rc: 1 May 11 21:57:20.979: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:57:30.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:57:31.566: INFO: rc: 1 May 11 21:57:31.566: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:57:41.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' May 11 21:57:41.666: INFO: rc: 1 May 11 21:57:41.666: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:57:51.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:57:51.887: INFO: rc: 1 May 11 21:57:51.887: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:58:01.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:58:02.078: INFO: rc: 1 May 11 21:58:02.078: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:58:12.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:58:12.177: INFO: rc: 1 May 11 21:58:12.177: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 
-- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:58:22.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:58:22.272: INFO: rc: 1 May 11 21:58:22.272: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:58:32.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:58:32.502: INFO: rc: 1 May 11 21:58:32.502: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:58:42.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:58:42.600: INFO: rc: 1 May 11 21:58:42.600: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:58:52.600: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:58:52.704: INFO: rc: 1 May 11 21:58:52.704: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:59:02.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:59:02.792: INFO: rc: 1 May 11 21:59:02.792: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:59:12.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:59:12.884: INFO: rc: 1 May 11 21:59:12.884: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:59:22.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:59:22.974: INFO: rc: 1 May 11 21:59:22.975: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:59:32.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:59:33.060: INFO: rc: 1 May 11 21:59:33.060: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:59:43.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:59:43.605: INFO: rc: 1 May 11 21:59:43.605: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 21:59:53.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 21:59:53.697: INFO: rc: 1 May 11 21:59:53.697: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server 
(NotFound): pods "ss-0" not found error: exit status 1 May 11 22:00:03.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 22:00:03.792: INFO: rc: 1 May 11 22:00:03.792: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 22:00:13.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 22:00:14.034: INFO: rc: 1 May 11 22:00:14.034: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 22:00:24.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 22:00:24.131: INFO: rc: 1 May 11 22:00:24.131: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 11 22:00:34.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7643 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' May 11 22:00:34.226: INFO: rc: 1 May 11 22:00:34.226: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 11 22:00:34.226: INFO: Scaling statefulset ss to 0 May 11 22:00:34.233: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 22:00:34.235: INFO: Deleting all statefulset in ns statefulset-7643 May 11 22:00:34.240: INFO: Scaling statefulset ss to 0 May 11 22:00:34.245: INFO: Waiting for statefulset status.replicas updated to 0 May 11 22:00:34.246: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:00:34.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7643" for this suite. 
• [SLOW TEST:359.787 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":221,"skipped":3620,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:00:34.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 22:00:34.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1739' May 11 22:00:35.041: INFO: stderr: "" May 11 22:00:35.041: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 11 
22:00:35.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1739' May 11 22:00:35.348: INFO: stderr: "" May 11 22:00:35.348: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 11 22:00:36.352: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:00:36.352: INFO: Found 0 / 1 May 11 22:00:37.487: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:00:37.487: INFO: Found 0 / 1 May 11 22:00:38.351: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:00:38.351: INFO: Found 0 / 1 May 11 22:00:39.379: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:00:39.379: INFO: Found 0 / 1 May 11 22:00:40.351: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:00:40.351: INFO: Found 0 / 1 May 11 22:00:41.352: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:00:41.352: INFO: Found 1 / 1 May 11 22:00:41.352: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 22:00:41.355: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:00:41.355: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 11 22:00:41.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-m2xzn --namespace=kubectl-1739' May 11 22:00:41.474: INFO: stderr: "" May 11 22:00:41.474: INFO: stdout: "Name: agnhost-master-m2xzn\nNamespace: kubectl-1739\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Mon, 11 May 2020 22:00:35 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.180\nIPs:\n IP: 10.244.2.180\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://088d01b35abd313200cb94d90abcc3768171bc9323510d29b0b7f29d943ec742\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 11 May 2020 22:00:39 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pnm2b (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pnm2b:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pnm2b\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-1739/agnhost-master-m2xzn to jerma-worker2\n Normal Pulled 4s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker2 Started container agnhost-master\n" May 11 22:00:41.474: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1739' May 11 22:00:41.594: INFO: stderr: "" May 11 22:00:41.594: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1739\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: agnhost-master-m2xzn\n" May 11 22:00:41.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1739' May 11 22:00:41.870: INFO: stderr: "" May 11 22:00:41.871: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1739\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.108.49.241\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.180:6379\nSession Affinity: None\nEvents: \n" May 11 22:00:41.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 11 22:00:42.003: INFO: stderr: "" May 11 22:00:42.003: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: 
node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Mon, 11 May 2020 22:00:37 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 11 May 2020 21:59:01 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 11 May 2020 21:59:01 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 11 May 2020 21:59:01 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 11 May 2020 21:59:01 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 57d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 57d\n kube-system 
etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 57d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 11 22:00:42.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1739' May 11 22:00:42.118: INFO: stderr: "" May 11 22:00:42.118: INFO: stdout: "Name: kubectl-1739\nLabels: e2e-framework=kubectl\n e2e-run=06fb8866-34ad-4a7a-a109-89878b41b6c2\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:00:42.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1739" for this suite. 
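The describe test above passes when the command output mentions the things it should: the pod name, its namespace, the node it landed on, and the controller that owns it. A minimal sketch of that kind of substring check, using a hypothetical `describesAll` helper (the real test asserts with Gomega matchers):

```go
package main

import (
	"fmt"
	"strings"
)

// describesAll reports whether a `kubectl describe` output mentions every
// field the test cares about. Hypothetical stand-in for the e2e test's
// output matching.
func describesAll(output string, required []string) bool {
	for _, want := range required {
		if !strings.Contains(output, want) {
			return false
		}
	}
	return true
}

func main() {
	out := "Name: agnhost-master-m2xzn\nNamespace: kubectl-1739\n" +
		"Node: jerma-worker2/172.17.0.8\nControlled By: ReplicationController/agnhost-master\n"
	required := []string{
		"agnhost-master-m2xzn",
		"kubectl-1739",
		"jerma-worker2",
		"ReplicationController/agnhost-master",
	}
	fmt.Println(describesAll(out, required)) // prints "true"
}
```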
• [SLOW TEST:7.748 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":222,"skipped":3641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:00:42.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 11 22:00:42.284: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 22:00:42.296: INFO: Number of nodes with available pods: 0 May 11 22:00:42.296: INFO: Node jerma-worker is running more than one daemon pod May 11 22:00:43.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 22:00:43.304: INFO: Number of nodes with available pods: 0 May 11 22:00:43.304: INFO: Node jerma-worker is running more than one daemon pod May 11 22:00:44.566: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 22:00:44.570: INFO: Number of nodes with available pods: 0 May 11 22:00:44.570: INFO: Node jerma-worker is running more than one daemon pod May 11 22:00:45.392: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 22:00:45.396: INFO: Number of nodes with available pods: 0 May 11 22:00:45.396: INFO: Node jerma-worker is running more than one daemon pod May 11 22:00:46.518: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 22:00:46.521: INFO: Number of nodes with available pods: 0 May 11 22:00:46.521: INFO: Node jerma-worker is running more than one daemon pod May 11 22:00:47.312: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 22:00:47.317: INFO: Number of nodes with available pods: 1 May 11 22:00:47.317: INFO: Node 
jerma-worker2 is running more than one daemon pod May 11 22:00:48.339: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 22:00:48.343: INFO: Number of nodes with available pods: 2 May 11 22:00:48.343: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 11 22:00:48.428: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 11 22:00:48.649: INFO: Number of nodes with available pods: 2 May 11 22:00:48.649: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3375, will wait for the garbage collector to delete the pods May 11 22:00:50.467: INFO: Deleting DaemonSet.extensions daemon-set took: 217.193428ms May 11 22:00:50.667: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.26029ms May 11 22:00:55.670: INFO: Number of nodes with available pods: 0 May 11 22:00:55.670: INFO: Number of running nodes: 0, number of available pods: 0 May 11 22:00:55.673: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3375/daemonsets","resourceVersion":"15368334"},"items":null} May 11 22:00:55.675: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3375/pods","resourceVersion":"15368334"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:00:55.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3375" for this suite. • [SLOW TEST:13.618 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":223,"skipped":3664,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:00:55.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 22:00:56.334: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326" in namespace "downward-api-458" to be "success or failure" May 11 22:00:56.375: INFO: Pod 
"downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326": Phase="Pending", Reason="", readiness=false. Elapsed: 41.002562ms May 11 22:00:58.823: INFO: Pod "downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489093131s May 11 22:01:01.085: INFO: Pod "downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326": Phase="Pending", Reason="", readiness=false. Elapsed: 4.750932798s May 11 22:01:03.586: INFO: Pod "downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326": Phase="Pending", Reason="", readiness=false. Elapsed: 7.252236982s May 11 22:01:05.601: INFO: Pod "downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.26741204s STEP: Saw pod success May 11 22:01:05.601: INFO: Pod "downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326" satisfied condition "success or failure" May 11 22:01:05.625: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326 container client-container: STEP: delete the pod May 11 22:01:05.871: INFO: Waiting for pod downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326 to disappear May 11 22:01:05.946: INFO: Pod downwardapi-volume-0bb13c49-8fe7-4972-a740-6174d194f326 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:01:05.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-458" for this suite. 
• [SLOW TEST:10.214 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3676,"failed":0} SSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:01:05.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4836 to expose endpoints map[] May 11 22:01:06.687: INFO: Get endpoints failed (2.299655ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 11 22:01:07.745: INFO: successfully validated that service endpoint-test2 in namespace services-4836 exposes endpoints map[] (1.060198727s elapsed) STEP: Creating pod pod1 in namespace services-4836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4836 to expose 
endpoints map[pod1:[80]] May 11 22:01:13.098: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.778195045s elapsed, will retry) May 11 22:01:14.105: INFO: successfully validated that service endpoint-test2 in namespace services-4836 exposes endpoints map[pod1:[80]] (5.785530477s elapsed) STEP: Creating pod pod2 in namespace services-4836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4836 to expose endpoints map[pod1:[80] pod2:[80]] May 11 22:01:19.043: INFO: Unexpected endpoints: found map[7f2b1108-37e8-4520-b329-3ffeea9656d1:[80]], expected map[pod1:[80] pod2:[80]] (4.933552827s elapsed, will retry) May 11 22:01:20.106: INFO: successfully validated that service endpoint-test2 in namespace services-4836 exposes endpoints map[pod1:[80] pod2:[80]] (5.996630559s elapsed) STEP: Deleting pod pod1 in namespace services-4836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4836 to expose endpoints map[pod2:[80]] May 11 22:01:21.530: INFO: successfully validated that service endpoint-test2 in namespace services-4836 exposes endpoints map[pod2:[80]] (1.418452879s elapsed) STEP: Deleting pod pod2 in namespace services-4836 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4836 to expose endpoints map[] May 11 22:01:22.799: INFO: successfully validated that service endpoint-test2 in namespace services-4836 exposes endpoints map[] (1.263914515s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:01:23.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4836" for this suite. 
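The endpoint test above repeatedly compares the service's observed endpoints against an expected map of pod name to ports (e.g. map[pod1:[80] pod2:[80]]) until they match or the 3m budget expires. A minimal sketch of the comparison step, using a hypothetical `endpointsMatch` helper rather than the framework's validator:

```go
package main

import (
	"fmt"
	"reflect"
	"sort"
)

// endpointsMatch compares observed service endpoints (pod name -> ports)
// against the expected map, ignoring port order. Hypothetical stand-in
// for the e2e framework's endpoint validation.
func endpointsMatch(observed, expected map[string][]int) bool {
	if len(observed) != len(expected) {
		return false
	}
	for pod, wantPorts := range expected {
		gotPorts, ok := observed[pod]
		if !ok {
			return false
		}
		// Copy before sorting so callers' slices are not reordered.
		g := append([]int(nil), gotPorts...)
		w := append([]int(nil), wantPorts...)
		sort.Ints(g)
		sort.Ints(w)
		if !reflect.DeepEqual(g, w) {
			return false
		}
	}
	return true
}

func main() {
	observed := map[string][]int{"pod1": {80}, "pod2": {80}}
	expected := map[string][]int{"pod1": {80}, "pod2": {80}}
	fmt.Println(endpointsMatch(observed, expected)) // prints "true"
}
```

The "Unexpected endpoints: found map[], expected map[pod1:[80]]" lines in the log are the mismatch branch of exactly this kind of comparison, followed by another poll.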
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:17.322 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":225,"skipped":3680,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:01:23.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0511 22:01:54.254377 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 11 22:01:54.254: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:01:54.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2882" for this suite. 
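The garbage collector test above deletes the Deployment with deleteOptions.PropagationPolicy=Orphan and then waits 30s to confirm the ReplicaSet survives. Orphan deletion removes the parent and strips it from the children's owner references, so the GC has no reason to collect them. A minimal in-memory sketch of those semantics with toy types (the real API sets metav1.DeletePropagationOrphan on DeleteOptions):

```go
package main

import "fmt"

// obj is a toy stand-in for an API object with owner references.
type obj struct {
	name   string
	owners []string
}

// deleteOrphan removes the parent from the store and strips it from the
// owner references of surviving children, so a garbage collector walking
// owner references will not cascade the delete — the behavior the test
// verifies for the Deployment's ReplicaSet.
func deleteOrphan(store map[string]*obj, parent string) {
	delete(store, parent)
	for _, o := range store {
		kept := o.owners[:0]
		for _, ref := range o.owners {
			if ref != parent {
				kept = append(kept, ref)
			}
		}
		o.owners = kept
	}
}

func main() {
	store := map[string]*obj{
		"deployment": {name: "deployment"},
		"replicaset": {name: "replicaset", owners: []string{"deployment"}},
	}
	deleteOrphan(store, "deployment")
	rs := store["replicaset"]
	fmt.Println(rs != nil, len(rs.owners)) // prints "true 0"
}
```

With Background or Foreground propagation the ReplicaSet would instead be collected along with the Deployment, which is what the 30-second wait is guarding against.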
• [SLOW TEST:30.979 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":226,"skipped":3684,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:01:54.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-86fe1dc0-7c89-4d86-9a10-ad9d28514ca8 STEP: Creating a pod to test consume secrets May 11 22:01:54.578: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b" in namespace "projected-8333" to be "success or failure" May 11 22:01:54.642: INFO: Pod "pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 63.832658ms May 11 22:01:56.645: INFO: Pod "pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066993459s May 11 22:01:58.655: INFO: Pod "pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076948737s May 11 22:02:00.706: INFO: Pod "pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b": Phase="Running", Reason="", readiness=true. Elapsed: 6.127754752s May 11 22:02:02.949: INFO: Pod "pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.370881952s STEP: Saw pod success May 11 22:02:02.949: INFO: Pod "pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b" satisfied condition "success or failure" May 11 22:02:02.952: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b container projected-secret-volume-test: STEP: delete the pod May 11 22:02:03.016: INFO: Waiting for pod pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b to disappear May 11 22:02:03.068: INFO: Pod pod-projected-secrets-4abff32e-7b20-484c-822b-0a8da3e6b52b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:02:03.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8333" for this suite. 
• [SLOW TEST:8.812 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3700,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:02:03.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 11 22:02:03.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-7862' May 11 22:02:03.721: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 22:02:03.721: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 11 22:02:07.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7862' May 11 22:02:07.864: INFO: stderr: "" May 11 22:02:07.864: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:02:07.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7862" for this suite. 
• [SLOW TEST:5.138 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":228,"skipped":3701,"failed":0} [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:02:08.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 22:02:10.292: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 5.637096ms) May 11 22:02:10.295: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.824839ms) May 11 22:02:10.299: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 4.377705ms) May 11 22:02:10.302: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.156989ms) May 11 22:02:10.609: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 307.012217ms) May 11 22:02:10.681: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 71.864861ms) May 11 22:02:10.939: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 258.07503ms) May 11 22:02:10.975: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 35.358883ms) May 11 22:02:10.980: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 5.304156ms) May 11 22:02:10.986: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 5.591291ms) May 11 22:02:10.989: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.183174ms) May 11 22:02:10.992: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.869766ms) May 11 22:02:10.995: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.997748ms) May 11 22:02:10.999: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.685322ms) May 11 22:02:11.002: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.084269ms) May 11 22:02:11.005: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.206378ms) May 11 22:02:11.008: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.843797ms) May 11 22:02:11.010: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.494643ms) May 11 22:02:11.013: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.355352ms) May 11 22:02:11.015: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.582382ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:02:11.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4948" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":229,"skipped":3701,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:02:11.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 22:02:11.167: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:02:11.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9651" for this suite. 
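Each of the twenty proxy requests above ends with a status/latency sample of the form `(200; 5.637096ms)`. A small parser for those pairs, to pull the samples out of a captured log; the regex is my assumption, shaped only to match the samples shown:

```python
import re

# Matches e.g. "(200; 5.637096ms)" or "(200; 1.5s)".
SAMPLE_RE = re.compile(r"\((\d{3});\s*([0-9.]+)(ms|s)\)")

def parse_latencies(log_text):
    """Extract (status_code, latency_ms) pairs from e2e proxy log output."""
    out = []
    for status, value, unit in SAMPLE_RE.findall(log_text):
        ms = float(value) * (1000.0 if unit == "s" else 1.0)
        out.append((int(status), ms))
    return out
```

Summarizing the parsed samples (min/median/max) is a quick way to spot the outliers visible above, such as the ~307 ms fourth request among otherwise single-digit-millisecond responses.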
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":230,"skipped":3719,"failed":0} SSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:02:11.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 11 22:02:12.371: INFO: created pod pod-service-account-defaultsa May 11 22:02:12.371: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 11 22:02:12.390: INFO: created pod pod-service-account-mountsa May 11 22:02:12.390: INFO: pod pod-service-account-mountsa service account token volume mount: true May 11 22:02:12.451: INFO: created pod pod-service-account-nomountsa May 11 22:02:12.451: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 11 22:02:12.486: INFO: created pod pod-service-account-defaultsa-mountspec May 11 22:02:12.486: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 11 22:02:12.500: INFO: created pod pod-service-account-mountsa-mountspec May 11 22:02:12.500: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 11 22:02:12.566: INFO: created pod 
pod-service-account-nomountsa-mountspec May 11 22:02:12.566: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 11 22:02:12.710: INFO: created pod pod-service-account-defaultsa-nomountspec May 11 22:02:12.710: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 11 22:02:12.757: INFO: created pod pod-service-account-mountsa-nomountspec May 11 22:02:12.757: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 11 22:02:12.890: INFO: created pod pod-service-account-nomountsa-nomountspec May 11 22:02:12.890: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:02:12.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2283" for this suite. 
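The nine pods above exercise the token-automount precedence rule: the pod spec's `automountServiceAccountToken` wins if set, otherwise the ServiceAccount's setting applies, otherwise the token is mounted by default. A sketch of that rule (the function name is mine); the assertions below reproduce the combinations logged above, e.g. `nomountsa-mountspec` mounts the token while `mountsa-nomountspec` does not:

```python
def token_mounted(sa_automount, pod_automount):
    """Decide whether a service account token volume is mounted.

    Precedence: the pod spec setting overrides the ServiceAccount's
    automountServiceAccountToken, and the default is True.
    Each argument is True, False, or None (unset).
    """
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True
```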
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":231,"skipped":3722,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:02:13.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8168 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 11 22:02:14.032: INFO: Found 0 stateful pods, waiting for 3 May 11 22:02:24.484: INFO: Found 1 stateful pods, waiting for 3 May 11 22:02:34.974: INFO: Found 1 stateful pods, waiting for 3 May 11 22:02:44.376: INFO: Found 2 stateful pods, waiting for 3 May 11 22:02:54.039: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 22:02:54.039: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 22:02:54.039: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - 
Ready=true May 11 22:02:54.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8168 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 22:03:04.720: INFO: stderr: "I0511 22:03:04.436527 3755 log.go:172] (0xc0001193f0) (0xc000685d60) Create stream\nI0511 22:03:04.436576 3755 log.go:172] (0xc0001193f0) (0xc000685d60) Stream added, broadcasting: 1\nI0511 22:03:04.439073 3755 log.go:172] (0xc0001193f0) Reply frame received for 1\nI0511 22:03:04.439117 3755 log.go:172] (0xc0001193f0) (0xc0006245a0) Create stream\nI0511 22:03:04.439126 3755 log.go:172] (0xc0001193f0) (0xc0006245a0) Stream added, broadcasting: 3\nI0511 22:03:04.439932 3755 log.go:172] (0xc0001193f0) Reply frame received for 3\nI0511 22:03:04.439963 3755 log.go:172] (0xc0001193f0) (0xc0002bb360) Create stream\nI0511 22:03:04.439972 3755 log.go:172] (0xc0001193f0) (0xc0002bb360) Stream added, broadcasting: 5\nI0511 22:03:04.440687 3755 log.go:172] (0xc0001193f0) Reply frame received for 5\nI0511 22:03:04.528398 3755 log.go:172] (0xc0001193f0) Data frame received for 5\nI0511 22:03:04.528422 3755 log.go:172] (0xc0002bb360) (5) Data frame handling\nI0511 22:03:04.528439 3755 log.go:172] (0xc0002bb360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 22:03:04.711629 3755 log.go:172] (0xc0001193f0) Data frame received for 3\nI0511 22:03:04.711659 3755 log.go:172] (0xc0006245a0) (3) Data frame handling\nI0511 22:03:04.711676 3755 log.go:172] (0xc0006245a0) (3) Data frame sent\nI0511 22:03:04.711872 3755 log.go:172] (0xc0001193f0) Data frame received for 5\nI0511 22:03:04.711894 3755 log.go:172] (0xc0002bb360) (5) Data frame handling\nI0511 22:03:04.712089 3755 log.go:172] (0xc0001193f0) Data frame received for 3\nI0511 22:03:04.712131 3755 log.go:172] (0xc0006245a0) (3) Data frame handling\nI0511 22:03:04.715025 3755 log.go:172] (0xc0001193f0) Data frame received for 1\nI0511 22:03:04.715049 
3755 log.go:172] (0xc000685d60) (1) Data frame handling\nI0511 22:03:04.715061 3755 log.go:172] (0xc000685d60) (1) Data frame sent\nI0511 22:03:04.715073 3755 log.go:172] (0xc0001193f0) (0xc000685d60) Stream removed, broadcasting: 1\nI0511 22:03:04.715107 3755 log.go:172] (0xc0001193f0) Go away received\nI0511 22:03:04.715372 3755 log.go:172] (0xc0001193f0) (0xc000685d60) Stream removed, broadcasting: 1\nI0511 22:03:04.715384 3755 log.go:172] (0xc0001193f0) (0xc0006245a0) Stream removed, broadcasting: 3\nI0511 22:03:04.715390 3755 log.go:172] (0xc0001193f0) (0xc0002bb360) Stream removed, broadcasting: 5\n" May 11 22:03:04.721: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 11 22:03:04.721: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 11 22:03:15.112: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 11 22:03:25.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8168 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 22:03:25.437: INFO: stderr: "I0511 22:03:25.359700 3776 log.go:172] (0xc000a109a0) (0xc0009fa320) Create stream\nI0511 22:03:25.359747 3776 log.go:172] (0xc000a109a0) (0xc0009fa320) Stream added, broadcasting: 1\nI0511 22:03:25.361653 3776 log.go:172] (0xc000a109a0) Reply frame received for 1\nI0511 22:03:25.361720 3776 log.go:172] (0xc000a109a0) (0xc0009e4140) Create stream\nI0511 22:03:25.361744 3776 log.go:172] (0xc000a109a0) (0xc0009e4140) Stream added, broadcasting: 3\nI0511 22:03:25.362404 3776 log.go:172] (0xc000a109a0) Reply frame received for 3\nI0511 22:03:25.362432 3776 log.go:172] (0xc000a109a0) (0xc0009e41e0) Create 
stream\nI0511 22:03:25.362440 3776 log.go:172] (0xc000a109a0) (0xc0009e41e0) Stream added, broadcasting: 5\nI0511 22:03:25.363166 3776 log.go:172] (0xc000a109a0) Reply frame received for 5\nI0511 22:03:25.431160 3776 log.go:172] (0xc000a109a0) Data frame received for 3\nI0511 22:03:25.431195 3776 log.go:172] (0xc0009e4140) (3) Data frame handling\nI0511 22:03:25.431215 3776 log.go:172] (0xc0009e4140) (3) Data frame sent\nI0511 22:03:25.431484 3776 log.go:172] (0xc000a109a0) Data frame received for 3\nI0511 22:03:25.431530 3776 log.go:172] (0xc0009e4140) (3) Data frame handling\nI0511 22:03:25.431558 3776 log.go:172] (0xc000a109a0) Data frame received for 5\nI0511 22:03:25.431581 3776 log.go:172] (0xc0009e41e0) (5) Data frame handling\nI0511 22:03:25.431612 3776 log.go:172] (0xc0009e41e0) (5) Data frame sent\nI0511 22:03:25.431623 3776 log.go:172] (0xc000a109a0) Data frame received for 5\nI0511 22:03:25.431631 3776 log.go:172] (0xc0009e41e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 22:03:25.432714 3776 log.go:172] (0xc000a109a0) Data frame received for 1\nI0511 22:03:25.432741 3776 log.go:172] (0xc0009fa320) (1) Data frame handling\nI0511 22:03:25.432769 3776 log.go:172] (0xc0009fa320) (1) Data frame sent\nI0511 22:03:25.432787 3776 log.go:172] (0xc000a109a0) (0xc0009fa320) Stream removed, broadcasting: 1\nI0511 22:03:25.432816 3776 log.go:172] (0xc000a109a0) Go away received\nI0511 22:03:25.433266 3776 log.go:172] (0xc000a109a0) (0xc0009fa320) Stream removed, broadcasting: 1\nI0511 22:03:25.433300 3776 log.go:172] (0xc000a109a0) (0xc0009e4140) Stream removed, broadcasting: 3\nI0511 22:03:25.433316 3776 log.go:172] (0xc000a109a0) (0xc0009e41e0) Stream removed, broadcasting: 5\n" May 11 22:03:25.437: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 22:03:25.437: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html' May 11 22:03:35.514: INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update May 11 22:03:35.514: INFO: Waiting for Pod statefulset-8168/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 22:03:35.514: INFO: Waiting for Pod statefulset-8168/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 22:03:45.605: INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update May 11 22:03:45.605: INFO: Waiting for Pod statefulset-8168/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 22:03:45.605: INFO: Waiting for Pod statefulset-8168/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 22:03:56.430: INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update May 11 22:03:56.430: INFO: Waiting for Pod statefulset-8168/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 22:04:05.520: INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update May 11 22:04:05.520: INFO: Waiting for Pod statefulset-8168/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 22:04:15.863: INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update STEP: Rolling back to a previous revision May 11 22:04:25.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8168 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 11 22:04:26.738: INFO: stderr: "I0511 22:04:25.646286 3795 log.go:172] (0xc0009660b0) (0xc000613d60) Create stream\nI0511 22:04:25.646340 3795 log.go:172] (0xc0009660b0) (0xc000613d60) Stream added, broadcasting: 1\nI0511 22:04:25.648430 3795 log.go:172] (0xc0009660b0) Reply frame received for 1\nI0511 22:04:25.648491 3795 log.go:172] (0xc0009660b0) (0xc0005966e0) Create stream\nI0511 22:04:25.648520 3795 log.go:172] (0xc0009660b0) (0xc0005966e0) Stream added, broadcasting: 
3\nI0511 22:04:25.649492 3795 log.go:172] (0xc0009660b0) Reply frame received for 3\nI0511 22:04:25.649538 3795 log.go:172] (0xc0009660b0) (0xc0003bec80) Create stream\nI0511 22:04:25.649550 3795 log.go:172] (0xc0009660b0) (0xc0003bec80) Stream added, broadcasting: 5\nI0511 22:04:25.650408 3795 log.go:172] (0xc0009660b0) Reply frame received for 5\nI0511 22:04:25.717820 3795 log.go:172] (0xc0009660b0) Data frame received for 5\nI0511 22:04:25.717847 3795 log.go:172] (0xc0003bec80) (5) Data frame handling\nI0511 22:04:25.717867 3795 log.go:172] (0xc0003bec80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0511 22:04:26.730818 3795 log.go:172] (0xc0009660b0) Data frame received for 3\nI0511 22:04:26.730854 3795 log.go:172] (0xc0005966e0) (3) Data frame handling\nI0511 22:04:26.730867 3795 log.go:172] (0xc0005966e0) (3) Data frame sent\nI0511 22:04:26.731068 3795 log.go:172] (0xc0009660b0) Data frame received for 3\nI0511 22:04:26.731090 3795 log.go:172] (0xc0005966e0) (3) Data frame handling\nI0511 22:04:26.731120 3795 log.go:172] (0xc0009660b0) Data frame received for 5\nI0511 22:04:26.731135 3795 log.go:172] (0xc0003bec80) (5) Data frame handling\nI0511 22:04:26.733423 3795 log.go:172] (0xc0009660b0) Data frame received for 1\nI0511 22:04:26.733442 3795 log.go:172] (0xc000613d60) (1) Data frame handling\nI0511 22:04:26.733451 3795 log.go:172] (0xc000613d60) (1) Data frame sent\nI0511 22:04:26.733459 3795 log.go:172] (0xc0009660b0) (0xc000613d60) Stream removed, broadcasting: 1\nI0511 22:04:26.733470 3795 log.go:172] (0xc0009660b0) Go away received\nI0511 22:04:26.733855 3795 log.go:172] (0xc0009660b0) (0xc000613d60) Stream removed, broadcasting: 1\nI0511 22:04:26.733880 3795 log.go:172] (0xc0009660b0) (0xc0005966e0) Stream removed, broadcasting: 3\nI0511 22:04:26.733893 3795 log.go:172] (0xc0009660b0) (0xc0003bec80) Stream removed, broadcasting: 5\n" May 11 22:04:26.739: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html'\n" May 11 22:04:26.739: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 11 22:04:36.919: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 11 22:04:47.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8168 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 11 22:04:47.712: INFO: stderr: "I0511 22:04:47.651768 3817 log.go:172] (0xc000bec580) (0xc00066fd60) Create stream\nI0511 22:04:47.651833 3817 log.go:172] (0xc000bec580) (0xc00066fd60) Stream added, broadcasting: 1\nI0511 22:04:47.653755 3817 log.go:172] (0xc000bec580) Reply frame received for 1\nI0511 22:04:47.653794 3817 log.go:172] (0xc000bec580) (0xc000be0320) Create stream\nI0511 22:04:47.653808 3817 log.go:172] (0xc000bec580) (0xc000be0320) Stream added, broadcasting: 3\nI0511 22:04:47.654734 3817 log.go:172] (0xc000bec580) Reply frame received for 3\nI0511 22:04:47.654756 3817 log.go:172] (0xc000bec580) (0xc000be03c0) Create stream\nI0511 22:04:47.654764 3817 log.go:172] (0xc000bec580) (0xc000be03c0) Stream added, broadcasting: 5\nI0511 22:04:47.655683 3817 log.go:172] (0xc000bec580) Reply frame received for 5\nI0511 22:04:47.700411 3817 log.go:172] (0xc000bec580) Data frame received for 5\nI0511 22:04:47.700444 3817 log.go:172] (0xc000be03c0) (5) Data frame handling\nI0511 22:04:47.700466 3817 log.go:172] (0xc000be03c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0511 22:04:47.705998 3817 log.go:172] (0xc000bec580) Data frame received for 5\nI0511 22:04:47.706028 3817 log.go:172] (0xc000be03c0) (5) Data frame handling\nI0511 22:04:47.706051 3817 log.go:172] (0xc000bec580) Data frame received for 3\nI0511 22:04:47.706072 3817 log.go:172] (0xc000be0320) (3) Data frame handling\nI0511 22:04:47.706098 3817 log.go:172] (0xc000be0320) (3) Data 
frame sent\nI0511 22:04:47.706112 3817 log.go:172] (0xc000bec580) Data frame received for 3\nI0511 22:04:47.706123 3817 log.go:172] (0xc000be0320) (3) Data frame handling\nI0511 22:04:47.707399 3817 log.go:172] (0xc000bec580) Data frame received for 1\nI0511 22:04:47.707432 3817 log.go:172] (0xc00066fd60) (1) Data frame handling\nI0511 22:04:47.707454 3817 log.go:172] (0xc00066fd60) (1) Data frame sent\nI0511 22:04:47.707469 3817 log.go:172] (0xc000bec580) (0xc00066fd60) Stream removed, broadcasting: 1\nI0511 22:04:47.707483 3817 log.go:172] (0xc000bec580) Go away received\nI0511 22:04:47.707885 3817 log.go:172] (0xc000bec580) (0xc00066fd60) Stream removed, broadcasting: 1\nI0511 22:04:47.707909 3817 log.go:172] (0xc000bec580) (0xc000be0320) Stream removed, broadcasting: 3\nI0511 22:04:47.707922 3817 log.go:172] (0xc000bec580) (0xc000be03c0) Stream removed, broadcasting: 5\n" May 11 22:04:47.712: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 11 22:04:47.712: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 11 22:04:57.730: INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update May 11 22:04:57.730: INFO: Waiting for Pod statefulset-8168/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 22:04:57.730: INFO: Waiting for Pod statefulset-8168/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 22:04:57.730: INFO: Waiting for Pod statefulset-8168/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 22:05:08.672: INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update May 11 22:05:08.672: INFO: Waiting for Pod statefulset-8168/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 22:05:08.672: INFO: Waiting for Pod statefulset-8168/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 22:05:17.738: 
INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update May 11 22:05:17.738: INFO: Waiting for Pod statefulset-8168/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 22:05:17.738: INFO: Waiting for Pod statefulset-8168/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 11 22:05:27.950: INFO: Waiting for StatefulSet statefulset-8168/ss2 to complete update May 11 22:05:27.950: INFO: Waiting for Pod statefulset-8168/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 22:05:37.786: INFO: Deleting all statefulset in ns statefulset-8168 May 11 22:05:37.788: INFO: Scaling statefulset ss2 to 0 May 11 22:06:17.843: INFO: Waiting for statefulset status.replicas updated to 0 May 11 22:06:17.845: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:06:18.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8168" for this suite. 
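The repeated "Waiting for Pod ... to have revision X update revision Y" lines compare each pod's controller-revision-hash label against the StatefulSet's updateRevision; the rollout (or rollback) is complete once no pod is left on an old revision. A minimal sketch of that completeness check, with data shapes of my own choosing:

```python
def pods_pending_update(update_revision, pod_revisions):
    """Return the pods still on an old revision, sorted by name.

    An empty result means the rolling update is complete.
    pod_revisions maps pod name -> its controller-revision-hash label.
    """
    return sorted(name for name, rev in pod_revisions.items()
                  if rev != update_revision)
```

In the rollback phase above, ss2-2 and then ss2-1 reach revision ss2-65c7964b94 first, consistent with StatefulSets updating pods in reverse ordinal order.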
• [SLOW TEST:245.962 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":232,"skipped":3742,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:06:19.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod May 11 22:06:19.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-5177 -- logs-generator --log-lines-total 100 --run-duration 20s' May 11 22:06:19.865: INFO: stderr: "" May 11 22:06:19.865: INFO: stdout: "pod/logs-generator 
created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 11 22:06:19.866: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 11 22:06:19.866: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5177" to be "running and ready, or succeeded" May 11 22:06:19.901: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 35.426423ms May 11 22:06:22.104: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.238681731s May 11 22:06:24.216: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350600817s May 11 22:06:26.373: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.50771086s May 11 22:06:26.373: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 11 22:06:26.373: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings May 11 22:06:26.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5177' May 11 22:06:26.571: INFO: stderr: "" May 11 22:06:26.571: INFO: stdout: "I0511 22:06:24.101612 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/zj4n 590\nI0511 22:06:24.301743 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/mdp 539\nI0511 22:06:24.501785 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/gfrc 417\nI0511 22:06:24.701909 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/hkt 448\nI0511 22:06:24.901750 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/ct8 310\nI0511 22:06:25.101811 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/f4n 382\nI0511 22:06:25.301750 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/8r7 290\nI0511 22:06:25.501811 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/qdp 354\nI0511 22:06:25.701833 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/84f 305\nI0511 22:06:25.901816 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/hxn 256\nI0511 22:06:26.101768 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/nrhn 463\nI0511 22:06:26.301795 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/rrd 492\nI0511 22:06:26.501779 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/jhd 360\n" STEP: limiting log lines May 11 22:06:26.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5177 --tail=1' May 11 22:06:27.309: INFO: stderr: "" May 11 22:06:27.309: INFO: stdout: "I0511 22:06:27.301808 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/r69z 248\n" May 11 22:06:27.309: INFO: got output "I0511 22:06:27.301808 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/r69z 
248\n" STEP: limiting log bytes May 11 22:06:27.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5177 --limit-bytes=1' May 11 22:06:27.632: INFO: stderr: "" May 11 22:06:27.632: INFO: stdout: "I" May 11 22:06:27.632: INFO: got output "I" STEP: exposing timestamps May 11 22:06:27.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5177 --tail=1 --timestamps' May 11 22:06:27.966: INFO: stderr: "" May 11 22:06:27.966: INFO: stdout: "2020-05-11T22:06:27.901946073Z I0511 22:06:27.901806 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/khk 308\n" May 11 22:06:27.966: INFO: got output "2020-05-11T22:06:27.901946073Z I0511 22:06:27.901806 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/khk 308\n" STEP: restricting to a time range May 11 22:06:30.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5177 --since=1s' May 11 22:06:30.988: INFO: stderr: "" May 11 22:06:30.988: INFO: stdout: "I0511 22:06:30.101699 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/kube-system/pods/wrgl 496\nI0511 22:06:30.301775 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/m9l 359\nI0511 22:06:30.501768 1 logs_generator.go:76] 32 POST /api/v1/namespaces/default/pods/gkrd 343\nI0511 22:06:30.701742 1 logs_generator.go:76] 33 GET /api/v1/namespaces/default/pods/f8jm 509\nI0511 22:06:30.901763 1 logs_generator.go:76] 34 POST /api/v1/namespaces/kube-system/pods/kgv 500\n" May 11 22:06:30.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5177 --since=24h' May 11 22:06:31.997: INFO: stderr: "" May 11 22:06:31.997: INFO: stdout: "I0511 22:06:24.101612 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/zj4n 590\nI0511 22:06:24.301743 1 
logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/mdp 539\nI0511 22:06:24.501785 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/gfrc 417\nI0511 22:06:24.701909 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/hkt 448\nI0511 22:06:24.901750 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/ct8 310\nI0511 22:06:25.101811 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/f4n 382\nI0511 22:06:25.301750 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/8r7 290\nI0511 22:06:25.501811 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/qdp 354\nI0511 22:06:25.701833 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/84f 305\nI0511 22:06:25.901816 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/hxn 256\nI0511 22:06:26.101768 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/nrhn 463\nI0511 22:06:26.301795 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/rrd 492\nI0511 22:06:26.501779 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/jhd 360\nI0511 22:06:26.701753 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/qfc 348\nI0511 22:06:26.901816 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/h9rv 274\nI0511 22:06:27.101776 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/hcnx 236\nI0511 22:06:27.301808 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/r69z 248\nI0511 22:06:27.501747 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/p9j 415\nI0511 22:06:27.701773 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/l4w5 338\nI0511 22:06:27.901806 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/khk 308\nI0511 22:06:28.101726 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/frm 349\nI0511 22:06:28.301796 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/mmn 361\nI0511 22:06:28.501771 1 
logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/t96c 389\nI0511 22:06:28.701775 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/h5q 308\nI0511 22:06:28.901727 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/g82 345\nI0511 22:06:29.101925 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/5sm 262\nI0511 22:06:29.301699 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/default/pods/l8cb 574\nI0511 22:06:29.501770 1 logs_generator.go:76] 27 POST /api/v1/namespaces/ns/pods/zkd 319\nI0511 22:06:29.702246 1 logs_generator.go:76] 28 POST /api/v1/namespaces/default/pods/bxd 562\nI0511 22:06:29.901706 1 logs_generator.go:76] 29 POST /api/v1/namespaces/kube-system/pods/w9s 563\nI0511 22:06:30.101699 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/kube-system/pods/wrgl 496\nI0511 22:06:30.301775 1 logs_generator.go:76] 31 POST /api/v1/namespaces/default/pods/m9l 359\nI0511 22:06:30.501768 1 logs_generator.go:76] 32 POST /api/v1/namespaces/default/pods/gkrd 343\nI0511 22:06:30.701742 1 logs_generator.go:76] 33 GET /api/v1/namespaces/default/pods/f8jm 509\nI0511 22:06:30.901763 1 logs_generator.go:76] 34 POST /api/v1/namespaces/kube-system/pods/kgv 500\nI0511 22:06:31.101751 1 logs_generator.go:76] 35 PUT /api/v1/namespaces/kube-system/pods/vkdr 364\nI0511 22:06:31.301767 1 logs_generator.go:76] 36 POST /api/v1/namespaces/ns/pods/rb8 248\nI0511 22:06:31.501775 1 logs_generator.go:76] 37 PUT /api/v1/namespaces/default/pods/srv7 588\nI0511 22:06:31.701766 1 logs_generator.go:76] 38 GET /api/v1/namespaces/kube-system/pods/vdbl 413\nI0511 22:06:31.901768 1 logs_generator.go:76] 39 PUT /api/v1/namespaces/ns/pods/747 463\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 11 22:06:31.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5177' May 11 22:06:39.879: INFO: stderr: "" May 11 
22:06:39.879: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:06:39.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5177" for this suite. • [SLOW TEST:20.866 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":233,"skipped":3754,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:06:39.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
running the image docker.io/library/httpd:2.4.38-alpine May 11 22:06:40.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6441' May 11 22:06:40.987: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 11 22:06:40.987: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 11 22:06:43.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6441' May 11 22:06:43.944: INFO: stderr: "" May 11 22:06:43.944: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:06:43.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6441" for this suite. 
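
The `kubectl logs` filtering flags exercised in the "Kubectl logs" test above (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) can be reproduced against any chatty pod. A minimal stand-in for the test's `logs-generator` pod, assuming a generic `busybox` image rather than the e2e suite's actual test image:

```yaml
# Hypothetical replacement for the e2e logs-generator: prints a line 5x/second.
apiVersion: v1
kind: Pod
metadata:
  name: logs-generator
spec:
  containers:
  - name: logs-generator
    image: busybox:1.31
    command:
    - sh
    - -c
    - 'i=0; while true; do echo "$i GET /api/v1/namespaces/default/pods/example 200"; i=$((i+1)); sleep 0.2; done'
```

The four invocations in the log then map to: `kubectl logs logs-generator --tail=1` (last line only), `kubectl logs logs-generator --limit-bytes=1` (first byte only, hence the `"I"` in the output above), `kubectl logs logs-generator --tail=1 --timestamps` (RFC3339 timestamp prefix), and `kubectl logs logs-generator --since=1s` (only entries from the last second).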
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":234,"skipped":3772,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:06:44.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 22:06:45.359: INFO: Waiting up to 5m0s for pod "downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848" in namespace "projected-1801" to be "success or failure" May 11 22:06:45.585: INFO: Pod "downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848": Phase="Pending", Reason="", readiness=false. Elapsed: 225.352872ms May 11 22:06:47.678: INFO: Pod "downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318656419s May 11 22:06:50.026: INFO: Pod "downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.666734529s May 11 22:06:52.239: INFO: Pod "downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848": Phase="Pending", Reason="", readiness=false. Elapsed: 6.879312602s May 11 22:06:54.242: INFO: Pod "downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.882962186s STEP: Saw pod success May 11 22:06:54.242: INFO: Pod "downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848" satisfied condition "success or failure" May 11 22:06:54.245: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848 container client-container: STEP: delete the pod May 11 22:06:54.472: INFO: Waiting for pod downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848 to disappear May 11 22:06:54.575: INFO: Pod downwardapi-volume-09942941-666e-4719-a5fe-c9588e82c848 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:06:54.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1801" for this suite. 
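
The projected downward API behavior being verified above (node allocatable reported as the default when no memory limit is set) can be sketched with a pod like the following; the image and paths are illustrative, not the test's actual spec:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox:1.31
    # No resources.limits.memory set: the file below falls back to
    # the node's allocatable memory.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```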
• [SLOW TEST:10.295 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3776,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:06:54.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 11 22:06:55.052: INFO: >>> kubeConfig: /root/.kube/config May 11 22:06:58.017: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:07:09.999: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1000" for this suite. • [SLOW TEST:15.441 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":236,"skipped":3831,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:07:10.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:07:24.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-204" for this suite. • [SLOW TEST:14.746 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":237,"skipped":3869,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:07:24.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 22:07:26.011: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 22:07:28.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831645, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:07:30.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831645, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:07:32.268: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831646, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831645, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 22:07:35.488: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 22:07:35.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3391-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:07:36.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9871" for this suite. STEP: Destroying namespace "webhook-9871-markers" for this suite. 
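
The registration step logged above ("Registering the mutating webhook for custom resource e2e-test-webhook-3391-crds.webhook.example.com via the AdmissionRegistration API") corresponds roughly to a manifest like this. The webhook name, path, and CA bundle are placeholders; the group, resource, and service names are taken from the log:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook        # hypothetical name
webhooks:
- name: mutate-custom-resource.example.com   # hypothetical
  failurePolicy: Fail                    # fail closed, as in the later test
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      name: e2e-test-webhook
      namespace: webhook-9871
      path: /mutating-custom-resource    # hypothetical
    caBundle: "<base64-encoded CA>"      # placeholder
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-3391-crds"]
```

With `failurePolicy: Fail`, an unreachable webhook rejects matching requests outright, which is exactly what the "fail closed webhook" test further below asserts.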
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.932 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":238,"skipped":3871,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:07:38.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted May 11 22:07:52.609: INFO: 5 pods remaining May 11 22:07:52.609: INFO: 5 pods has nil 
DeletionTimestamp May 11 22:07:52.609: INFO: STEP: Gathering metrics W0511 22:07:56.851751 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 11 22:07:56.851: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:07:56.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-584" for this suite. 
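
The garbage-collector scenario above hinges on pods carrying two `ownerReferences`. A sketch of what such a dependent looks like, with placeholder UIDs (in practice each `uid` must match the live owner object):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: "11111111-1111-1111-1111-111111111111"   # placeholder
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: "22222222-2222-2222-2222-222222222222"   # placeholder
```

When `simpletest-rc-to-be-deleted` is deleted, the garbage collector removes its sole dependents but keeps any pod that still has `simpletest-rc-to-stay` as a valid owner, which is the behavior the test verifies.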
• [SLOW TEST:18.154 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":239,"skipped":3894,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:07:56.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 22:07:57.971: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 22:07:59.981: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831678, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831678, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831678, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831677, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:08:02.583: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831678, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831678, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831678, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724831677, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 22:08:05.461: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should 
unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:08:07.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2262" for this suite. STEP: Destroying namespace "webhook-2262-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.629 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":240,"skipped":3940,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:08:09.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 11 22:08:20.696: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:08:21.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3443" for this suite. 
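
The termination-message behavior asserted above (`Expected: &{OK} to match Container's Termination Message: OK`) can be reproduced with a pod along these lines; the image and message are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.31
    # Write the termination message to the file, then exit 0.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```

Because the pod succeeds, `FallbackToLogsOnError` reads the file (not the logs), and the message surfaces in `status.containerStatuses[].state.terminated.message`.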
• [SLOW TEST:11.840 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3958,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:08:21.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 22:08:21.527: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b" in namespace "downward-api-5650" to be "success or failure" May 11 22:08:21.566: INFO: Pod "downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.292038ms May 11 22:08:23.679: INFO: Pod "downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152426956s May 11 22:08:25.683: INFO: Pod "downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b": Phase="Running", Reason="", readiness=true. Elapsed: 4.155815506s May 11 22:08:27.686: INFO: Pod "downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.159241966s STEP: Saw pod success May 11 22:08:27.686: INFO: Pod "downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b" satisfied condition "success or failure" May 11 22:08:27.689: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b container client-container: STEP: delete the pod May 11 22:08:27.786: INFO: Waiting for pod downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b to disappear May 11 22:08:27.916: INFO: Pod downwardapi-volume-665eabd8-e7c4-4e66-aa7f-d02c3138b00b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:08:27.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5650" for this suite. 
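The downward-API volume exercised here exposes the container's own CPU request as a file inside the pod. A sketch of such a pod (names and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                # report the request in millicores
```

With `divisor: 1m` the projected file contains the request expressed in millicores, which is what the `client-container` logs that the test inspects would show.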
• [SLOW TEST:6.759 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3972,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:08:28.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:09:28.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5255" for this suite. 
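The readiness-probe test above relies on a probe that always fails: the pod stays unready for the full observation window but is never restarted, since only liveness probes trigger restarts. A sketch (name and image are illustrative; the run uses a generated `test-webserver-…` name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo
spec:
  containers:
  - name: test-webserver
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]      # always fails, so Ready never becomes true
      initialDelaySeconds: 5
      periodSeconds: 5
```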
• [SLOW TEST:60.231 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3976,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:09:28.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 11 22:09:28.411: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 11 22:09:28.477: INFO: Waiting for terminating namespaces to be deleted... 
May 11 22:09:28.480: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 11 22:09:28.523: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 22:09:28.523: INFO: Container kindnet-cni ready: true, restart count 0 May 11 22:09:28.523: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 22:09:28.523: INFO: Container kube-proxy ready: true, restart count 0 May 11 22:09:28.523: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 11 22:09:28.535: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 11 22:09:28.535: INFO: Container kube-bench ready: false, restart count 0 May 11 22:09:28.535: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 22:09:28.535: INFO: Container kindnet-cni ready: true, restart count 0 May 11 22:09:28.535: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 11 22:09:28.535: INFO: Container kube-proxy ready: true, restart count 0 May 11 22:09:28.535: INFO: test-webserver-af506c6d-c334-426c-a313-c90c51cd2059 from container-probe-5255 started at 2020-05-11 22:08:28 +0000 UTC (1 container statuses recorded) May 11 22:09:28.535: INFO: Container test-webserver ready: false, restart count 0 May 11 22:09:28.535: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 11 22:09:28.535: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. 
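The scheduled pod carries a nodeSelector that no node satisfies, which produces the FailedScheduling event. A sketch (label key and value are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    env: no-such-node                # no node in the cluster carries this label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1      # assumed image
```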
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e18c8a5190c0d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:09:29.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2464" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":244,"skipped":3980,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:09:29.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-d881356d-003f-41e7-860b-da7d5b5ac471 STEP: Creating a pod to test consume secrets May 11 22:09:29.711: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-df014ebe-01ea-44dc-b9a1-2051889dcc66" in namespace "projected-5304" to be "success 
or failure" May 11 22:09:29.794: INFO: Pod "pod-projected-secrets-df014ebe-01ea-44dc-b9a1-2051889dcc66": Phase="Pending", Reason="", readiness=false. Elapsed: 83.053748ms May 11 22:09:31.806: INFO: Pod "pod-projected-secrets-df014ebe-01ea-44dc-b9a1-2051889dcc66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095289404s May 11 22:09:33.809: INFO: Pod "pod-projected-secrets-df014ebe-01ea-44dc-b9a1-2051889dcc66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098424795s STEP: Saw pod success May 11 22:09:33.809: INFO: Pod "pod-projected-secrets-df014ebe-01ea-44dc-b9a1-2051889dcc66" satisfied condition "success or failure" May 11 22:09:33.811: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-df014ebe-01ea-44dc-b9a1-2051889dcc66 container projected-secret-volume-test: STEP: delete the pod May 11 22:09:33.861: INFO: Waiting for pod pod-projected-secrets-df014ebe-01ea-44dc-b9a1-2051889dcc66 to disappear May 11 22:09:33.961: INFO: Pod pod-projected-secrets-df014ebe-01ea-44dc-b9a1-2051889dcc66 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:09:33.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5304" for this suite. 
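The projected-secret test mounts a secret through a projected volume with a `defaultMode` applied to the projected files, which the test container then verifies. A sketch (secret name and mode are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400              # applied to every projected file
      sources:
      - secret:
          name: projected-secret-test   # assumed to exist in the namespace
```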
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":3983,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:09:33.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8410 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 11 22:09:34.431: INFO: Found 0 stateful pods, waiting for 3 May 11 22:09:44.648: INFO: Found 2 stateful pods, waiting for 3 May 11 22:09:54.435: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 22:09:54.435: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 22:09:54.435: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 11 
22:10:04.436: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 22:10:04.436: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 22:10:04.436: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 11 22:10:04.462: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 11 22:10:14.494: INFO: Updating stateful set ss2 May 11 22:10:14.530: INFO: Waiting for Pod statefulset-8410/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 11 22:10:24.708: INFO: Found 2 stateful pods, waiting for 3 May 11 22:10:34.968: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 22:10:34.968: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 22:10:34.968: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 11 22:10:44.976: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 11 22:10:44.976: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 11 22:10:44.976: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 11 22:10:45.426: INFO: Updating stateful set ss2 May 11 22:10:45.556: INFO: Waiting for Pod statefulset-8410/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 22:10:55.563: INFO: Waiting for Pod statefulset-8410/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 
11 22:11:05.695: INFO: Updating stateful set ss2 May 11 22:11:05.766: INFO: Waiting for StatefulSet statefulset-8410/ss2 to complete update May 11 22:11:05.766: INFO: Waiting for Pod statefulset-8410/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 11 22:11:15.774: INFO: Waiting for StatefulSet statefulset-8410/ss2 to complete update May 11 22:11:15.774: INFO: Waiting for Pod statefulset-8410/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 11 22:11:25.774: INFO: Deleting all statefulset in ns statefulset-8410 May 11 22:11:25.776: INFO: Scaling statefulset ss2 to 0 May 11 22:11:55.800: INFO: Waiting for statefulset status.replicas updated to 0 May 11 22:11:55.803: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:11:55.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8410" for this suite. 
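The canary and phased updates above are driven by the RollingUpdate `partition`: only pods with an ordinal greater than or equal to the partition receive the new template, so lowering the partition step by step phases the rollout across the set. A sketch using the images from this run (object name and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2-demo
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                   # only ordinal 2 gets the new revision (the canary)
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.39-alpine   # the updated image from the log
```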
• [SLOW TEST:141.853 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":246,"skipped":4029,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:11:55.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-17cb0bec-4a24-4c80-b9ee-f1d03b099774 STEP: Creating a pod to test consume configMaps May 11 22:11:55.950: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc37b112-a06b-4795-9883-91f8c2405420" in namespace "configmap-9148" to be "success or failure" May 11 22:11:55.954: INFO: Pod "pod-configmaps-fc37b112-a06b-4795-9883-91f8c2405420": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.551956ms May 11 22:11:58.042: INFO: Pod "pod-configmaps-fc37b112-a06b-4795-9883-91f8c2405420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091417875s May 11 22:12:00.044: INFO: Pod "pod-configmaps-fc37b112-a06b-4795-9883-91f8c2405420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.094106521s STEP: Saw pod success May 11 22:12:00.044: INFO: Pod "pod-configmaps-fc37b112-a06b-4795-9883-91f8c2405420" satisfied condition "success or failure" May 11 22:12:00.046: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fc37b112-a06b-4795-9883-91f8c2405420 container configmap-volume-test: STEP: delete the pod May 11 22:12:00.233: INFO: Waiting for pod pod-configmaps-fc37b112-a06b-4795-9883-91f8c2405420 to disappear May 11 22:12:00.304: INFO: Pod pod-configmaps-fc37b112-a06b-4795-9883-91f8c2405420 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:12:00.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9148" for this suite. 
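The ConfigMap volume test follows the same pattern as the secret ones: a ConfigMap key projected as a file that the test container reads back. A sketch (names and key are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume    # assumed to exist with a data-1 key
```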
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4033,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:12:00.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 11 22:12:00.379: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:12:15.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6754" for this suite. 
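Marking a CRD version `served: false` removes its definitions from the published OpenAPI spec while leaving the other version intact, which is what the checks above assert. A sketch of such a multi-version CRD (group, kind, and schema are illustrative):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true                     # still published in the OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                    # unserved: dropped from the published spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```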
• [SLOW TEST:15.242 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":248,"skipped":4034,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:12:15.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
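The pod under test in this case attaches a postStart exec hook; the hook runs immediately after the container starts and must complete before the container is considered started. A sketch (image and hook command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart-done > /tmp/hook"]
```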
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 11 22:12:23.842: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:23.847: INFO: Pod pod-with-poststart-exec-hook still exists May 11 22:12:25.847: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:25.850: INFO: Pod pod-with-poststart-exec-hook still exists May 11 22:12:27.847: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:27.851: INFO: Pod pod-with-poststart-exec-hook still exists May 11 22:12:29.847: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:29.851: INFO: Pod pod-with-poststart-exec-hook still exists May 11 22:12:31.847: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:31.850: INFO: Pod pod-with-poststart-exec-hook still exists May 11 22:12:33.847: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:33.851: INFO: Pod pod-with-poststart-exec-hook still exists May 11 22:12:35.847: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:35.850: INFO: Pod pod-with-poststart-exec-hook still exists May 11 22:12:37.847: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:37.849: INFO: Pod pod-with-poststart-exec-hook still exists May 11 22:12:39.847: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 11 22:12:39.850: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:12:39.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6288" 
for this suite. • [SLOW TEST:24.303 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4038,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:12:39.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 11 22:12:40.241: INFO: Waiting up to 5m0s for pod "downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c" in namespace "downward-api-590" to be "success or failure" May 11 22:12:40.283: INFO: Pod "downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.555434ms May 11 22:12:42.287: INFO: Pod "downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045859047s May 11 22:12:44.291: INFO: Pod "downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049480829s May 11 22:12:46.294: INFO: Pod "downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05239017s STEP: Saw pod success May 11 22:12:46.294: INFO: Pod "downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c" satisfied condition "success or failure" May 11 22:12:46.295: INFO: Trying to get logs from node jerma-worker pod downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c container dapi-container: STEP: delete the pod May 11 22:12:46.372: INFO: Waiting for pod downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c to disappear May 11 22:12:46.392: INFO: Pod downward-api-e1c183b4-181d-4748-b7d3-6d560ee3e13c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:12:46.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-590" for this suite. 
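The downward API can also inject pod metadata as environment variables; here the pod UID is supplied through a `fieldRef`. A sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
```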
• [SLOW TEST:6.538 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4070,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:12:46.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 11 22:12:57.191: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 22:12:57.207: INFO: Pod pod-with-prestop-http-hook still exists May 11 22:12:59.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 22:12:59.210: INFO: Pod pod-with-prestop-http-hook still exists May 11 22:13:01.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 22:13:01.213: INFO: Pod pod-with-prestop-http-hook still exists May 11 22:13:03.207: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 11 22:13:03.211: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:13:03.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3522" for this suite. 
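The preStop variant registers an HTTP hook that the kubelet calls against a handler before the container is stopped. A sketch (host, port, and path are illustrative; the e2e test points the hook at its own handler pod created in the BeforeEach step):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: busybox:1.31              # assumed image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop    # illustrative handler path
          port: 8080
          host: 10.244.1.10          # illustrative handler address
```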
• [SLOW TEST:16.833 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4075,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:13:03.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7876 STEP: creating a selector STEP: Creating the service pods in kubernetes May 11 22:13:03.344: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 11 22:13:31.517: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.21:8080/dial?request=hostname&protocol=udp&host=10.244.1.20&port=8081&tries=1'] 
Namespace:pod-network-test-7876 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:13:31.517: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:13:31.543005 6 log.go:172] (0xc001c6c2c0) (0xc002480280) Create stream
I0511 22:13:31.543031 6 log.go:172] (0xc001c6c2c0) (0xc002480280) Stream added, broadcasting: 1
I0511 22:13:31.544043 6 log.go:172] (0xc001c6c2c0) Reply frame received for 1
I0511 22:13:31.544069 6 log.go:172] (0xc001c6c2c0) (0xc001732320) Create stream
I0511 22:13:31.544079 6 log.go:172] (0xc001c6c2c0) (0xc001732320) Stream added, broadcasting: 3
I0511 22:13:31.544557 6 log.go:172] (0xc001c6c2c0) Reply frame received for 3
I0511 22:13:31.544578 6 log.go:172] (0xc001c6c2c0) (0xc002480460) Create stream
I0511 22:13:31.544585 6 log.go:172] (0xc001c6c2c0) (0xc002480460) Stream added, broadcasting: 5
I0511 22:13:31.545070 6 log.go:172] (0xc001c6c2c0) Reply frame received for 5
I0511 22:13:31.627144 6 log.go:172] (0xc001c6c2c0) Data frame received for 3
I0511 22:13:31.627161 6 log.go:172] (0xc001732320) (3) Data frame handling
I0511 22:13:31.627174 6 log.go:172] (0xc001732320) (3) Data frame sent
I0511 22:13:31.627789 6 log.go:172] (0xc001c6c2c0) Data frame received for 5
I0511 22:13:31.627799 6 log.go:172] (0xc002480460) (5) Data frame handling
I0511 22:13:31.627814 6 log.go:172] (0xc001c6c2c0) Data frame received for 3
I0511 22:13:31.627826 6 log.go:172] (0xc001732320) (3) Data frame handling
I0511 22:13:31.629094 6 log.go:172] (0xc001c6c2c0) Data frame received for 1
I0511 22:13:31.629104 6 log.go:172] (0xc002480280) (1) Data frame handling
I0511 22:13:31.629203 6 log.go:172] (0xc002480280) (1) Data frame sent
I0511 22:13:31.629218 6 log.go:172] (0xc001c6c2c0) (0xc002480280) Stream removed, broadcasting: 1
I0511 22:13:31.629307 6 log.go:172] (0xc001c6c2c0) (0xc002480280) Stream removed, broadcasting: 1
I0511 22:13:31.629325 6 log.go:172] (0xc001c6c2c0) (0xc001732320) Stream removed, broadcasting: 3
I0511 22:13:31.629337 6 log.go:172] (0xc001c6c2c0) (0xc002480460) Stream removed, broadcasting: 5
May 11 22:13:31.629: INFO: Waiting for responses: map[]
I0511 22:13:31.629426 6 log.go:172] (0xc001c6c2c0) Go away received
May 11 22:13:31.631: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.21:8080/dial?request=hostname&protocol=udp&host=10.244.2.214&port=8081&tries=1'] Namespace:pod-network-test-7876 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:13:31.631: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:13:31.654791 6 log.go:172] (0xc001c6ca50) (0xc002480aa0) Create stream
I0511 22:13:31.654818 6 log.go:172] (0xc001c6ca50) (0xc002480aa0) Stream added, broadcasting: 1
I0511 22:13:31.656397 6 log.go:172] (0xc001c6ca50) Reply frame received for 1
I0511 22:13:31.656442 6 log.go:172] (0xc001c6ca50) (0xc0029888c0) Create stream
I0511 22:13:31.656458 6 log.go:172] (0xc001c6ca50) (0xc0029888c0) Stream added, broadcasting: 3
I0511 22:13:31.657601 6 log.go:172] (0xc001c6ca50) Reply frame received for 3
I0511 22:13:31.657631 6 log.go:172] (0xc001c6ca50) (0xc001732640) Create stream
I0511 22:13:31.657641 6 log.go:172] (0xc001c6ca50) (0xc001732640) Stream added, broadcasting: 5
I0511 22:13:31.658447 6 log.go:172] (0xc001c6ca50) Reply frame received for 5
I0511 22:13:31.708807 6 log.go:172] (0xc001c6ca50) Data frame received for 3
I0511 22:13:31.708821 6 log.go:172] (0xc0029888c0) (3) Data frame handling
I0511 22:13:31.708830 6 log.go:172] (0xc0029888c0) (3) Data frame sent
I0511 22:13:31.708834 6 log.go:172] (0xc001c6ca50) Data frame received for 3
I0511 22:13:31.708838 6 log.go:172] (0xc0029888c0) (3) Data frame handling
I0511 22:13:31.708990 6 log.go:172] (0xc001c6ca50) Data frame received for 5
I0511 22:13:31.709011 6 log.go:172] (0xc001732640) (5) Data frame handling
I0511 22:13:31.709953 6 log.go:172] (0xc001c6ca50) Data frame received for 1
I0511 22:13:31.709964 6 log.go:172] (0xc002480aa0) (1) Data frame handling
I0511 22:13:31.709969 6 log.go:172] (0xc002480aa0) (1) Data frame sent
I0511 22:13:31.709980 6 log.go:172] (0xc001c6ca50) (0xc002480aa0) Stream removed, broadcasting: 1
I0511 22:13:31.709989 6 log.go:172] (0xc001c6ca50) Go away received
I0511 22:13:31.710102 6 log.go:172] (0xc001c6ca50) (0xc002480aa0) Stream removed, broadcasting: 1
I0511 22:13:31.710115 6 log.go:172] (0xc001c6ca50) (0xc0029888c0) Stream removed, broadcasting: 3
I0511 22:13:31.710120 6 log.go:172] (0xc001c6ca50) (0xc001732640) Stream removed, broadcasting: 5
May 11 22:13:31.710: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:13:31.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7876" for this suite.
• [SLOW TEST:28.504 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4080,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:13:31.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
May 11 22:13:31.795: INFO: Waiting up to 5m0s for pod "pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a" in namespace "emptydir-8525" to be "success or failure"
May 11 22:13:31.827: INFO: Pod "pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.953113ms
May 11 22:13:33.831: INFO: Pod "pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036132123s
May 11 22:13:35.835: INFO: Pod "pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040142061s
May 11 22:13:37.875: INFO: Pod "pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.079440425s
STEP: Saw pod success
May 11 22:13:37.875: INFO: Pod "pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a" satisfied condition "success or failure"
May 11 22:13:37.888: INFO: Trying to get logs from node jerma-worker pod pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a container test-container:
STEP: delete the pod
May 11 22:13:37.963: INFO: Waiting for pod pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a to disappear
May 11 22:13:38.216: INFO: Pod pod-8877c935-bcf4-45c4-88ed-560d4c7cc80a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:13:38.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8525" for this suite.
• [SLOW TEST:6.493 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4080,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:13:38.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
May 11 22:13:43.684: INFO: Successfully updated pod "annotationupdate9f6c6ef4-4c39-4102-adf0-16c18ba7f111"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:13:47.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6294" for this suite.
• [SLOW TEST:9.710 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4133,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:13:47.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:13:48.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9505" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":255,"skipped":4156,"failed":0}
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:13:48.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
May 11 22:13:48.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7793'
May 11 22:13:54.934: INFO: stderr: ""
May 11 22:13:54.934: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May 11 22:13:55.937: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 22:13:55.937: INFO: Found 0 / 1
May 11 22:13:57.091: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 22:13:57.091: INFO: Found 0 / 1
May 11 22:13:57.996: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 22:13:57.996: INFO: Found 0 / 1
May 11 22:13:59.037: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 22:13:59.037: INFO: Found 0 / 1
May 11 22:13:59.938: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 22:13:59.938: INFO: Found 1 / 1
May 11 22:13:59.938: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 11 22:13:59.941: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 22:13:59.941: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 11 22:13:59.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-vv2kb --namespace=kubectl-7793 -p {"metadata":{"annotations":{"x":"y"}}}'
May 11 22:14:00.039: INFO: stderr: ""
May 11 22:14:00.039: INFO: stdout: "pod/agnhost-master-vv2kb patched\n"
STEP: checking annotations
May 11 22:14:00.046: INFO: Selector matched 1 pods for map[app:agnhost]
May 11 22:14:00.046: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:14:00.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7793" for this suite.
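The `kubectl patch` call above sends a JSON merge patch (`{"metadata":{"annotations":{"x":"y"}}}`), which the API server combines with the pod's existing metadata. A minimal sketch of RFC 7386 merge-patch semantics; the `json_merge_patch` helper and the sample pod dict are illustrative, not the e2e framework's code:

```python
def json_merge_patch(target, patch):
    """Apply an RFC 7386 JSON merge patch: dicts merge recursively,
    a None value deletes a key, anything else replaces the target value."""
    if not isinstance(patch, dict):
        return patch
    if not isinstance(target, dict):
        target = {}
    result = dict(target)
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = json_merge_patch(result.get(key), value)
    return result

# Hypothetical pod object before the patch; only metadata is shown.
pod = {"metadata": {"name": "agnhost-master-vv2kb",
                    "annotations": {"existing": "kept"}}}
patched = json_merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
print(patched["metadata"]["annotations"])  # {'existing': 'kept', 'x': 'y'}
```

Because objects merge rather than replace, the `x: y` annotation is added without disturbing annotations the kubelet or controllers already set, which is what the "checking annotations" step relies on.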
• [SLOW TEST:12.023 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":256,"skipped":4156,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:14:00.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-2973
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 11 22:14:00.204: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
May 11 22:14:33.952: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.25:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2973 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:14:33.952: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:14:33.981085 6 log.go:172] (0xc00173a9a0) (0xc001e7cd20) Create stream
I0511 22:14:33.981356 6 log.go:172] (0xc00173a9a0) (0xc001e7cd20) Stream added, broadcasting: 1
I0511 22:14:33.983072 6 log.go:172] (0xc00173a9a0) Reply frame received for 1
I0511 22:14:33.983131 6 log.go:172] (0xc00173a9a0) (0xc0027f60a0) Create stream
I0511 22:14:33.983156 6 log.go:172] (0xc00173a9a0) (0xc0027f60a0) Stream added, broadcasting: 3
I0511 22:14:33.984231 6 log.go:172] (0xc00173a9a0) Reply frame received for 3
I0511 22:14:33.984269 6 log.go:172] (0xc00173a9a0) (0xc00197b860) Create stream
I0511 22:14:33.984285 6 log.go:172] (0xc00173a9a0) (0xc00197b860) Stream added, broadcasting: 5
I0511 22:14:33.985420 6 log.go:172] (0xc00173a9a0) Reply frame received for 5
I0511 22:14:34.039560 6 log.go:172] (0xc00173a9a0) Data frame received for 3
I0511 22:14:34.039584 6 log.go:172] (0xc0027f60a0) (3) Data frame handling
I0511 22:14:34.039591 6 log.go:172] (0xc0027f60a0) (3) Data frame sent
I0511 22:14:34.039599 6 log.go:172] (0xc00173a9a0) Data frame received for 5
I0511 22:14:34.039605 6 log.go:172] (0xc00197b860) (5) Data frame handling
I0511 22:14:34.039624 6 log.go:172] (0xc00173a9a0) Data frame received for 3
I0511 22:14:34.039635 6 log.go:172] (0xc0027f60a0) (3) Data frame handling
I0511 22:14:34.042088 6 log.go:172] (0xc00173a9a0) Data frame received for 1
I0511 22:14:34.042106 6 log.go:172] (0xc001e7cd20) (1) Data frame handling
I0511 22:14:34.042129 6 log.go:172] (0xc001e7cd20) (1) Data frame sent
I0511 22:14:34.042140 6 log.go:172] (0xc00173a9a0) (0xc001e7cd20) Stream removed, broadcasting: 1
I0511 22:14:34.042196 6 log.go:172] (0xc00173a9a0) (0xc001e7cd20) Stream removed, broadcasting: 1
I0511 22:14:34.042208 6 log.go:172] (0xc00173a9a0) (0xc0027f60a0) Stream removed, broadcasting: 3
I0511 22:14:34.042219 6 log.go:172] (0xc00173a9a0) (0xc00197b860) Stream removed, broadcasting: 5
May 11 22:14:34.042: INFO: Found all expected endpoints: [netserver-0]
I0511 22:14:34.042330 6 log.go:172] (0xc00173a9a0) Go away received
May 11 22:14:34.045: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.215:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2973 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 11 22:14:34.045: INFO: >>> kubeConfig: /root/.kube/config
I0511 22:14:34.075599 6 log.go:172] (0xc001373e40) (0xc001a99180) Create stream
I0511 22:14:34.075629 6 log.go:172] (0xc001373e40) (0xc001a99180) Stream added, broadcasting: 1
I0511 22:14:34.077781 6 log.go:172] (0xc001373e40) Reply frame received for 1
I0511 22:14:34.077820 6 log.go:172] (0xc001373e40) (0xc00197ba40) Create stream
I0511 22:14:34.077832 6 log.go:172] (0xc001373e40) (0xc00197ba40) Stream added, broadcasting: 3
I0511 22:14:34.078828 6 log.go:172] (0xc001373e40) Reply frame received for 3
I0511 22:14:34.078859 6 log.go:172] (0xc001373e40) (0xc001a99220) Create stream
I0511 22:14:34.078869 6 log.go:172] (0xc001373e40) (0xc001a99220) Stream added, broadcasting: 5
I0511 22:14:34.079742 6 log.go:172] (0xc001373e40) Reply frame received for 5
I0511 22:14:34.153676 6 log.go:172] (0xc001373e40) Data frame received for 5
I0511 22:14:34.153693 6 log.go:172] (0xc001a99220) (5) Data frame handling
I0511 22:14:34.153714 6 log.go:172] (0xc001373e40) Data frame received for 3
I0511 22:14:34.153728 6 log.go:172] (0xc00197ba40) (3) Data frame handling
I0511 22:14:34.153744 6 log.go:172] (0xc00197ba40) (3) Data frame sent
I0511 22:14:34.153754 6 log.go:172] (0xc001373e40) Data frame received for 3
I0511 22:14:34.153767 6 log.go:172] (0xc00197ba40) (3) Data frame handling
I0511 22:14:34.154932 6 log.go:172] (0xc001373e40) Data frame received for 1
I0511 22:14:34.154953 6 log.go:172] (0xc001a99180) (1) Data frame handling
I0511 22:14:34.154966 6 log.go:172] (0xc001a99180) (1) Data frame sent
I0511 22:14:34.154975 6 log.go:172] (0xc001373e40) (0xc001a99180) Stream removed, broadcasting: 1
I0511 22:14:34.154988 6 log.go:172] (0xc001373e40) Go away received
I0511 22:14:34.155098 6 log.go:172] (0xc001373e40) (0xc001a99180) Stream removed, broadcasting: 1
I0511 22:14:34.155130 6 log.go:172] (0xc001373e40) (0xc00197ba40) Stream removed, broadcasting: 3
I0511 22:14:34.155141 6 log.go:172] (0xc001373e40) (0xc001a99220) Stream removed, broadcasting: 5
May 11 22:14:34.155: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:14:34.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2973" for this suite.
• [SLOW TEST:34.106 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:14:34.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:14:52.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7351" for this suite.
• [SLOW TEST:18.211 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":258,"skipped":4232,"failed":0}
S
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:14:52.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 22:14:52.553: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
May 11 22:14:55.988: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:14:56.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4447" for this suite.
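The best-effort ResourceQuota scope exercised above matches only pods with the BestEffort QoS class, i.e. pods in which no container declares any CPU or memory request or limit. A rough sketch of that classification rule, using simplified pod dicts rather than real API objects (the function name is ours, not Kubernetes code):

```python
def is_best_effort(pod):
    """A pod is BestEffort iff no container sets any cpu/memory request
    or limit (simplified view of the Kubernetes QoS class rules)."""
    for container in pod.get("spec", {}).get("containers", []):
        resources = container.get("resources", {})
        for section in ("requests", "limits"):
            if resources.get(section, {}).get("cpu") or \
               resources.get(section, {}).get("memory"):
                return False
    return True

# Hypothetical pods: one with no resources at all, one with a memory request.
best_effort_pod = {"spec": {"containers": [{"name": "app", "resources": {}}]}}
burstable_pod = {"spec": {"containers": [
    {"name": "app", "resources": {"requests": {"memory": "100Mi"}}}]}}
print(is_best_effort(best_effort_pod), is_best_effort(burstable_pod))  # True False
```

This is why the test's best-effort pod is counted by the BestEffort-scoped quota and ignored by the NotBestEffort-scoped one, and vice versa for the pod that declares requests.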
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":259,"skipped":4233,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:14:57.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 11 22:14:59.727: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 11 22:15:02.006: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 22:15:04.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 22:15:06.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 22:15:08.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 11 22:15:10.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832099, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 11 22:15:13.067: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May 11 22:15:13.090: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:15:13.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4843" for this suite.
STEP: Destroying namespace "webhook-4843-markers" for this suite.
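A denying webhook like the one registered above answers the API server's AdmissionReview request with `allowed: false` and a reason, which surfaces as the error the client sees when creating the CRD. A minimal sketch of building that response body; the field names follow the admission.k8s.io/v1 schema, but the helper and the rejection message are illustrative, not the e2e webhook's actual code:

```python
import json

def deny_response(review):
    """Build an AdmissionReview response that rejects the request.
    The response uid must echo the uid of the incoming request."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": False,
            # Hypothetical message; the real webhook supplies its own reason.
            "status": {"code": 403, "message": "CRD creation is not allowed"},
        },
    }

# Hypothetical incoming review for a CRD create, heavily trimmed.
incoming = {"request": {"uid": "705ab4f5-6393-11e8-b7cc-42010a800002",
                        "kind": {"kind": "CustomResourceDefinition"}}}
print(json.dumps(deny_response(incoming)["response"]["allowed"]))  # false
```

Because the webhook is registered for CRD create operations, the API server consults it before persisting the object and propagates the 403 status back to the caller.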
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.747 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":260,"skipped":4244,"failed":0}
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:15:14.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
May 11 22:15:21.356: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
May 11 22:15:26.453: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:15:26.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9488" for this suite.
• [SLOW TEST:11.643 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":261,"skipped":4248,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:15:26.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-2400
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2400
STEP: Creating statefulset with conflicting port in namespace statefulset-2400
STEP: Waiting until pod test-pod will start running in namespace statefulset-2400
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-2400
May 11 22:15:34.583: INFO: Observed stateful pod in namespace: statefulset-2400, name: ss-0, uid: 4b4045be-a16a-48db-abfd-2f8861f6aa70, status phase: Pending. Waiting for statefulset controller to delete.
May 11 22:15:34.628: INFO: Observed stateful pod in namespace: statefulset-2400, name: ss-0, uid: 4b4045be-a16a-48db-abfd-2f8861f6aa70, status phase: Failed. Waiting for statefulset controller to delete.
May 11 22:15:34.636: INFO: Observed stateful pod in namespace: statefulset-2400, name: ss-0, uid: 4b4045be-a16a-48db-abfd-2f8861f6aa70, status phase: Failed. Waiting for statefulset controller to delete.
May 11 22:15:34.641: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2400
STEP: Removing pod with conflicting port in namespace statefulset-2400
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-2400 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 11 22:15:43.191: INFO: Deleting all statefulset in ns statefulset-2400
May 11 22:15:43.194: INFO: Scaling statefulset ss to 0
May 11 22:15:53.213: INFO: Waiting for statefulset status.replicas updated to 0
May 11 22:15:53.215: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:15:53.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2400" for this suite.
• [SLOW TEST:26.808 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":262,"skipped":4292,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:15:53.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 22:15:53.455: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 11 22:15:53.536: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:15:53.539: INFO: Number of nodes with available pods: 0
May 11 22:15:53.539: INFO: Node jerma-worker is running more than one daemon pod
May 11 22:15:54.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:15:54.546: INFO: Number of nodes with available pods: 0
May 11 22:15:54.546: INFO: Node jerma-worker is running more than one daemon pod
May 11 22:15:55.566: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:15:55.569: INFO: Number of nodes with available pods: 0
May 11 22:15:55.569: INFO: Node jerma-worker is running more than one daemon pod
May 11 22:15:56.544: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:15:56.547: INFO: Number of nodes with available pods: 0
May 11 22:15:56.547: INFO: Node jerma-worker is running more than one daemon pod
May 11 22:15:57.591: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:15:57.594: INFO: Number of nodes with available pods: 0
May 11 22:15:57.594: INFO: Node jerma-worker is running more than one daemon pod
May 11 22:15:58.576: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:15:58.581: INFO: Number of nodes with available pods: 2
May 11 22:15:58.581: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 11 22:15:58.904: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:15:58.904: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:15:58.965: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:15:59.969: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:15:59.969: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:15:59.972: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:01.492: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:01.492: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:01.498: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:01.998: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:01.998: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:02.244: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:02.969: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:02.969: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:02.973: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:03.976: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:03.976: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:03.976: INFO: Pod daemon-set-pwqt7 is not available
May 11 22:16:03.979: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:04.968: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:04.968: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:04.968: INFO: Pod daemon-set-pwqt7 is not available
May 11 22:16:04.972: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:06.074: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:06.074: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:06.074: INFO: Pod daemon-set-pwqt7 is not available
May 11 22:16:06.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:06.969: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:06.969: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:06.969: INFO: Pod daemon-set-pwqt7 is not available
May 11 22:16:06.972: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:08.015: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:08.015: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:08.015: INFO: Pod daemon-set-pwqt7 is not available
May 11 22:16:08.018: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:08.969: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:08.969: INFO: Wrong image for pod: daemon-set-pwqt7. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:08.969: INFO: Pod daemon-set-pwqt7 is not available
May 11 22:16:08.992: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:09.987: INFO: Pod daemon-set-5wgnz is not available
May 11 22:16:09.987: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:10.477: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:10.995: INFO: Pod daemon-set-5wgnz is not available
May 11 22:16:10.995: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:10.999: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:11.969: INFO: Pod daemon-set-5wgnz is not available
May 11 22:16:11.969: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:12.027: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:12.968: INFO: Pod daemon-set-5wgnz is not available
May 11 22:16:12.968: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:12.970: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:13.968: INFO: Pod daemon-set-5wgnz is not available
May 11 22:16:13.968: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:13.970: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:14.989: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:15.002: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:15.968: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:15.972: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:16.969: INFO: Wrong image for pod: daemon-set-7f6vp. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 11 22:16:16.969: INFO: Pod daemon-set-7f6vp is not available
May 11 22:16:16.973: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:17.987: INFO: Pod daemon-set-rhzlj is not available
May 11 22:16:17.991: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May 11 22:16:17.994: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:17.997: INFO: Number of nodes with available pods: 1
May 11 22:16:17.997: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 22:16:19.001: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:19.005: INFO: Number of nodes with available pods: 1
May 11 22:16:19.005: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 22:16:20.323: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:20.327: INFO: Number of nodes with available pods: 1
May 11 22:16:20.327: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 22:16:21.002: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:21.005: INFO: Number of nodes with available pods: 1
May 11 22:16:21.005: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 22:16:22.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:22.085: INFO: Number of nodes with available pods: 1
May 11 22:16:22.085: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 22:16:23.000: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:23.003: INFO: Number of nodes with available pods: 1
May 11 22:16:23.003: INFO: Node jerma-worker2 is running more than one daemon pod
May 11 22:16:24.001: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 11 22:16:24.003: INFO: Number of nodes with available pods: 2
May 11 22:16:24.003: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5881, will wait for the garbage collector to delete the pods
May 11 22:16:24.278: INFO: Deleting DaemonSet.extensions daemon-set took: 5.133856ms
May 11 22:16:24.678: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.237232ms
May 11 22:16:28.180: INFO: Number of nodes with available pods: 0
May 11 22:16:28.180: INFO: Number of running nodes: 0, number of available pods: 0
May 11 22:16:28.182: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5881/daemonsets","resourceVersion":"15373195"},"items":null}
May 11 22:16:28.184: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5881/pods","resourceVersion":"15373195"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:16:28.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5881" for this suite.
• [SLOW TEST:34.955 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":263,"skipped":4294,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:16:28.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 11 22:16:38.420: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 22:16:38.432: INFO: Pod pod-with-poststart-http-hook still exists
May 11 22:16:40.432: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 22:16:40.436: INFO: Pod pod-with-poststart-http-hook still exists
May 11 22:16:42.432: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 22:16:42.439: INFO: Pod pod-with-poststart-http-hook still exists
May 11 22:16:44.432: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 22:16:44.436: INFO: Pod pod-with-poststart-http-hook still exists
May 11 22:16:46.432: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 22:16:46.436: INFO: Pod pod-with-poststart-http-hook still exists
May 11 22:16:48.432: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 22:16:48.471: INFO: Pod pod-with-poststart-http-hook still exists
May 11 22:16:50.432: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May 11 22:16:50.435: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:16:50.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-51" for this suite.
• [SLOW TEST:22.213 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4303,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:16:50.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 11 22:16:50.490: INFO: Creating deployment "webserver-deployment"
May 11 22:16:50.505: INFO: Waiting for observed generation 1
May 11 22:16:53.021: INFO: Waiting for all required pods to come up
May 11 22:16:53.318: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 11 22:17:07.718: INFO: Waiting for deployment "webserver-deployment" to complete
May 11 22:17:07.723: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 11 22:17:07.726: INFO: Updating deployment webserver-deployment
May 11 22:17:07.726: INFO: Waiting for observed generation 2
May 11 22:17:09.740: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 11 22:17:09.742: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 11 22:17:09.920: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 11 22:17:09.960: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 11 22:17:09.960: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 11 22:17:09.963: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May 11 22:17:09.967: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May 11 22:17:09.967: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May 11 22:17:09.972: INFO: Updating deployment webserver-deployment
May 11 22:17:09.972: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May 11 22:17:10.210: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 11 22:17:10.884: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63
May 11 22:17:12.025: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5728 /apis/apps/v1/namespaces/deployment-5728/deployments/webserver-deployment 83578c16-05f4-448f-9249-df32c8d1195e 15373571 3 2020-05-11 22:16:50 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003562d28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-11 22:17:08 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 22:17:10 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 11 22:17:13.418: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5728 /apis/apps/v1/namespaces/deployment-5728/replicasets/webserver-deployment-c7997dcc8 
e9a27ee7-470f-454c-818b-043c099f57ee 15373614 3 2020-05-11 22:17:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 83578c16-05f4-448f-9249-df32c8d1195e 0xc003563207 0xc003563208}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003563278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 11 22:17:13.418: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 11 22:17:13.418: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5728 /apis/apps/v1/namespaces/deployment-5728/replicasets/webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 15373603 3 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 
83578c16-05f4-448f-9249-df32c8d1195e 0xc003563137 0xc003563138}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003563198 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 11 22:17:14.094: INFO: Pod "webserver-deployment-595b5b9587-2xqwn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2xqwn webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-2xqwn c0bbe83a-3742-4de2-8055-32f35ed1dc1b 15373597 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003563797 0xc003563798}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.095: INFO: Pod "webserver-deployment-595b5b9587-5f7jl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5f7jl webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-5f7jl beb8a7e5-6ee2-4080-97f6-6cc403f7ff05 15373468 0 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc0035638b7 0xc0035638b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.225,StartTime:2020-05-11 22:16:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 22:17:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://591b048de504acc0680509fff21ce648629549470b93d5b81c905d2fe4388304,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.225,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.095: INFO: Pod "webserver-deployment-595b5b9587-68kh8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-68kh8 webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-68kh8 6f0624f2-1d0b-4394-8e90-3e8281347fd1 15373627 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003563a37 0xc003563a38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 22:17:11 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.095: INFO: Pod "webserver-deployment-595b5b9587-78qmz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-78qmz webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-78qmz 85e6ae5d-1ef7-4733-96ec-e14e92df8616 15373593 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003563ba7 0xc003563ba8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.095: INFO: Pod "webserver-deployment-595b5b9587-94gpp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-94gpp webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-94gpp 697e0a70-2c9e-4661-b0d5-3e9acff70883 15373580 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003563cc7 0xc003563cc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.096: INFO: Pod "webserver-deployment-595b5b9587-9dqs9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9dqs9 webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-9dqs9 e85400ef-4517-43dd-97a2-01db3aa0bdc1 15373585 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003563de7 0xc003563de8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.096: INFO: Pod "webserver-deployment-595b5b9587-9mnhw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9mnhw webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-9mnhw 6fef7fa2-c25b-4b55-86e6-fa1b280e363e 15373453 0 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003563f07 0xc003563f08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.36,StartTime:2020-05-11 22:16:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 22:17:04 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8c1310a02b0074c9d74d8872f8483ffdf534c3e293b60adfcabe16f8b901a7ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.36,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.096: INFO: Pod "webserver-deployment-595b5b9587-bcjzr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bcjzr webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-bcjzr f3cd2fda-13b6-4b0e-9a81-ca2d7439a2df 15373592 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026457 0xc003026458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.096: INFO: Pod "webserver-deployment-595b5b9587-c5g4h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c5g4h webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-c5g4h c27d8ecc-0490-4d79-adbc-b007096f1707 15373562 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026587 0xc003026588}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.096: INFO: Pod "webserver-deployment-595b5b9587-cvrl8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cvrl8 webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-cvrl8 e6b9c242-b3ec-42b1-bba6-14f9851cdb48 15373403 0 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc0030266a7 0xc0030266a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.34,StartTime:2020-05-11 22:16:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 22:16:56 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c703826e2003a9c72d897f57d2e09d30c2e53e85c59d62399a3c2d40ab6060d0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.097: INFO: Pod "webserver-deployment-595b5b9587-dfw9d" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dfw9d webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-dfw9d 14b58c50-e6a2-4b06-82fb-9779e4a79e86 15373583 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026837 0xc003026838}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.097: INFO: Pod "webserver-deployment-595b5b9587-djjhk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-djjhk webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-djjhk 1e8ac888-ba02-4c72-ab1f-9183ee614fd8 15373590 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026977 0xc003026978}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.097: INFO: Pod "webserver-deployment-595b5b9587-hg7l4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hg7l4 webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-hg7l4 7167ae00-8789-4895-8bf9-1390ed5825fd 15373573 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026a97 0xc003026a98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.097: INFO: Pod "webserver-deployment-595b5b9587-plkmj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-plkmj webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-plkmj b073db07-75a0-4796-924b-81b8811bc1d0 15373474 0 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026bb7 0xc003026bb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.37,StartTime:2020-05-11 22:16:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 22:17:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://16278abc5615a61fece5fee9e0027dcc98b83756212035fb68867cd59562f0da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.098: INFO: Pod "webserver-deployment-595b5b9587-tjfv9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tjfv9 webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-tjfv9 46481e43-144f-4c1a-9dd9-7e7151b6f36d 15373471 0 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026d57 0xc003026d58}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.38,StartTime:2020-05-11 22:16:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 22:17:05 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7ec6b0f4e92b655f4d68875aee8e9d2a7f36b519bbcf074fc60bf35e258a962a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.098: INFO: Pod "webserver-deployment-595b5b9587-twhd6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-twhd6 webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-twhd6 2417c492-af5a-49ea-aced-fddf666b6606 15373598 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026ed7 0xc003026ed8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.098: INFO: Pod "webserver-deployment-595b5b9587-vlmmp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vlmmp webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-vlmmp ca69885f-5d33-4cdb-afae-af72b3fe497f 15373439 0 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003026ff7 0xc003026ff8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.223,StartTime:2020-05-11 22:16:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 22:17:03 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7cb4e682f6e9fe33ca0bbba6e7f82a7066113ae7565838dd6c89de2c21174d95,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.223,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.098: INFO: Pod "webserver-deployment-595b5b9587-w4hfk" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-w4hfk webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-w4hfk 6567a480-fac6-48d8-bc63-679a17fbf21f 15373423 0 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc003027177 0xc003027178}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.35,StartTime:2020-05-11 22:16:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 22:16:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://944f28a43248792eebaca1317fea8fb45e02aa8414272e60cf8bdd77a363a4a2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.099: INFO: Pod "webserver-deployment-595b5b9587-xbx5n" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xbx5n webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-xbx5n c2d53014-9f30-444d-a7f6-27b76377db5a 15373451 0 2020-05-11 22:16:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc0030272f7 0xc0030272f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:16:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.224,StartTime:2020-05-11 22:16:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-11 22:17:04 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://23695567bdbe1c30ffe45eba214f101e9e98bfce3e9b6b7093cdd806ffead463,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.224,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.099: INFO: Pod "webserver-deployment-595b5b9587-xs7z8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xs7z8 webserver-deployment-595b5b9587- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-595b5b9587-xs7z8 e7f872f6-f585-4096-9755-eb5114987751 15373606 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 89620cbb-774e-4e93-9597-cbec8b39d92e 0xc0030277c7 0xc0030277c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 22:17:10 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.099: INFO: Pod "webserver-deployment-c7997dcc8-498p9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-498p9 webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-498p9 770a08d6-a79f-458b-844c-09f55cd15c9f 15373540 0 2020-05-11 22:17:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b04107 0xc002b04108}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-11 22:17:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.099: INFO: Pod "webserver-deployment-c7997dcc8-7gq6f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7gq6f webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-7gq6f 49df66e9-0929-45b7-89a5-60a0b19c836a 15373601 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b04297 0xc002b04298}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.099: INFO: Pod "webserver-deployment-c7997dcc8-92sjk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-92sjk webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-92sjk 47674c62-db09-4027-8933-171af1d2ce7a 15373587 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b043e7 0xc002b043e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.099: INFO: Pod "webserver-deployment-c7997dcc8-bv826" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-bv826 webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-bv826 92e5c63b-d192-4f89-a505-2d9c91868365 15373537 0 2020-05-11 22:17:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b04517 0xc002b04518}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:08 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 22:17:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.100: INFO: Pod "webserver-deployment-c7997dcc8-h5mt2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-h5mt2 webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-h5mt2 1f038ca7-4b48-4076-9400-fb5980ce747c 15373595 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b046c7 0xc002b046c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.100: INFO: Pod "webserver-deployment-c7997dcc8-hsc6d" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hsc6d webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-hsc6d 4f141d25-e70d-489b-b23c-a64d4093bcfd 15373563 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b047f7 0xc002b047f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.100: INFO: Pod "webserver-deployment-c7997dcc8-k4z7x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k4z7x webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-k4z7x 1155cf1e-7f97-4bc3-8004-e7b0e64092da 15373594 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b04937 0xc002b04938}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.100: INFO: Pod "webserver-deployment-c7997dcc8-nhqpt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nhqpt webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-nhqpt 3b97fe2d-1b06-4997-ab83-ab46f26af7bd 15373586 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b04a67 0xc002b04a68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.100: INFO: Pod "webserver-deployment-c7997dcc8-npktq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-npktq webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-npktq 4afb87ea-c1d3-4f5c-8e06-757b83577dd7 15373596 0 2020-05-11 22:17:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b04b97 0xc002b04b98}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.100: INFO: Pod "webserver-deployment-c7997dcc8-q6qlw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q6qlw webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-q6qlw 47ecfc40-1d8f-4fe7-bd00-0d432b36f40e 15373516 0 2020-05-11 22:17:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b04cc7 0xc002b04cc8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-11 22:17:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.101: INFO: Pod "webserver-deployment-c7997dcc8-r49bs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r49bs webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-r49bs 86929cd4-7876-4370-86b7-018ef44163e1 15373615 0 2020-05-11 22:17:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b04e47 0xc002b04e48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.39,StartTime:2020-05-11 22:17:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization 
failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.101: INFO: Pod "webserver-deployment-c7997dcc8-v6zvr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v6zvr webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-v6zvr e7ff1e48-7047-4568-974a-c11ef866b97b 15373532 0 2020-05-11 22:17:07 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b05007 0xc002b05008}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPrese
nt,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-11 22:17:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 11 22:17:14.101: INFO: Pod "webserver-deployment-c7997dcc8-vqltq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vqltq webserver-deployment-c7997dcc8- deployment-5728 /api/v1/namespaces/deployment-5728/pods/webserver-deployment-c7997dcc8-vqltq 587d0c25-1f50-49e8-953e-695a05847683 15373609 0 2020-05-11 22:17:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 e9a27ee7-470f-454c-818b-043c099f57ee 0xc002b05187 0xc002b05188}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-n6ghv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-n6ghv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-n6ghv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:17:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:17:14.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5728" for this suite. 
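The proportional-scaling behavior verified above is driven by the Deployment's rolling-update parameters: when a Deployment is scaled while a rollout is in progress, the controller distributes the added replicas proportionally between the old and new ReplicaSets. A minimal sketch of a Deployment that exercises this (the name, label, and strategy values are illustrative, not read from the test source):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deployment   # illustrative; the test namespace/names above are generated
spec:
  replicas: 10
  selector:
    matchLabels:
      name: httpd
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3          # pods allowed above the desired count during a rollout
      maxUnavailable: 2    # pods allowed to be unavailable during a rollout
  template:
    metadata:
      labels:
        name: httpd
    spec:
      containers:
      - name: httpd
        image: webserver:404   # an unpullable image, as seen in the pod dumps above,
                               # keeps the rollout in progress while scaling happens
```

Scaling such a Deployment mid-rollout (e.g. from 10 to 30 replicas) splits the 20 new pods between the old and new ReplicaSets in proportion to their current sizes, subject to the maxSurge/maxUnavailable bounds.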
• [SLOW TEST:24.335 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":265,"skipped":4331,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:17:14.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 11 22:17:21.396: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 11 22:17:24.795: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832242, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:17:27.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832242, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:17:29.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832242, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:17:31.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832242, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:17:33.388: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832242, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:17:35.003: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832242, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:17:36.910: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832241, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832242, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 11 22:17:40.595: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:17:57.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9372" for this suite. STEP: Destroying namespace "webhook-9372-markers" for this suite. 
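The timeout semantics checked above are configured per webhook via `timeoutSeconds` and `failurePolicy` in an admissionregistration.k8s.io/v1 object. A hedged sketch (the webhook name, handler path, and CA bundle are placeholders; only the namespace appears in the log):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: slow-webhook-example        # illustrative name
webhooks:
- name: slow.example.com
  timeoutSeconds: 1                 # shorter than the webhook's 5s latency
  failurePolicy: Ignore             # with Ignore, a timed-out call does not reject the request;
                                    # with Fail (the default), the request fails as in the first step
  clientConfig:
    service:
      namespace: webhook-9372       # namespace from the log; service name/path are assumptions
      name: e2e-test-webhook
      path: /always-allow-delay-5s  # hypothetical slow handler
    caBundle: <base64-encoded-CA>   # placeholder
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

If `timeoutSeconds` is omitted entirely, it defaults to 10s in the v1 API, which is the final case the test registers ("timeout is empty (defaulted to 10s in v1)").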
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:46.641 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":266,"skipped":4335,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:18:01.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 11 22:18:02.777: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-watch-closed 0b73b7b4-b14b-4bb0-84bb-d8ff18114a9b 15373999 0 2020-05-11 22:18:02 +0000 UTC 
map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 22:18:02.777: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-watch-closed 0b73b7b4-b14b-4bb0-84bb-d8ff18114a9b 15374002 0 2020-05-11 22:18:02 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 11 22:18:03.583: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-watch-closed 0b73b7b4-b14b-4bb0-84bb-d8ff18114a9b 15374005 0 2020-05-11 22:18:02 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 22:18:03.583: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2163 /api/v1/namespaces/watch-2163/configmaps/e2e-watch-test-watch-closed 0b73b7b4-b14b-4bb0-84bb-d8ff18114a9b 15374007 0 2020-05-11 22:18:02 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:18:03.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2163" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":267,"skipped":4337,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:18:03.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 11 22:18:05.115: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:18:19.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2268" for this suite. 
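The init-container case above relies on two spec fields interacting: init containers run to completion before app containers start, and with `restartPolicy: Never` a failed init container fails the pod permanently. A minimal sketch of such a pod (names and image tag are assumptions, not from the test source):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-example          # illustrative name
spec:
  restartPolicy: Never             # a failed init container marks the whole pod Failed
  initContainers:
  - name: init1
    image: busybox:1.29            # image tag is an assumption
    command: ["/bin/false"]        # exits non-zero, so app containers never start
  containers:
  - name: app
    image: busybox:1.29
    command: ["/bin/true"]
```

With `restartPolicy: Always` or `OnFailure`, the kubelet would instead restart the failing init container with backoff rather than failing the pod.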
• [SLOW TEST:16.210 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":268,"skipped":4357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:18:20.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 11 22:18:21.240: INFO: Waiting up to 5m0s for pod "pod-cca1b90b-2790-443f-9aa7-32f16828b121" in namespace "emptydir-9632" to be "success or failure" May 11 22:18:21.478: INFO: Pod "pod-cca1b90b-2790-443f-9aa7-32f16828b121": Phase="Pending", Reason="", readiness=false. Elapsed: 237.547001ms May 11 22:18:23.495: INFO: Pod "pod-cca1b90b-2790-443f-9aa7-32f16828b121": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.254928781s May 11 22:18:25.670: INFO: Pod "pod-cca1b90b-2790-443f-9aa7-32f16828b121": Phase="Pending", Reason="", readiness=false. Elapsed: 4.429490221s May 11 22:18:27.723: INFO: Pod "pod-cca1b90b-2790-443f-9aa7-32f16828b121": Phase="Running", Reason="", readiness=true. Elapsed: 6.482565604s May 11 22:18:29.727: INFO: Pod "pod-cca1b90b-2790-443f-9aa7-32f16828b121": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.486512602s STEP: Saw pod success May 11 22:18:29.727: INFO: Pod "pod-cca1b90b-2790-443f-9aa7-32f16828b121" satisfied condition "success or failure" May 11 22:18:29.729: INFO: Trying to get logs from node jerma-worker2 pod pod-cca1b90b-2790-443f-9aa7-32f16828b121 container test-container: STEP: delete the pod May 11 22:18:29.774: INFO: Waiting for pod pod-cca1b90b-2790-443f-9aa7-32f16828b121 to disappear May 11 22:18:29.789: INFO: Pod pod-cca1b90b-2790-443f-9aa7-32f16828b121 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:18:29.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9632" for this suite. 
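The emptyDir test above mounts a volume with no `medium` set (node-default, disk-backed storage) and checks the mount's file mode from inside the container. A rough sketch of an equivalent pod (the conformance test uses a dedicated mounttest image; busybox and the exact command here are stand-ins):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-example      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container           # container name taken from the log
    image: busybox:1.29            # stand-in image, an assumption
    command: ["sh", "-c", "stat -c '%a' /test-volume"]  # prints the mount point's mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # no medium specified => default (disk-backed) medium
```

The pod runs to completion and its logs are checked, matching the "success or failure" polling and log retrieval visible in the output above.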
• [SLOW TEST:9.740 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4381,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:18:29.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 11 22:18:29.932: INFO: Waiting up to 5m0s for pod "var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90" in namespace "var-expansion-3955" to be "success or failure" May 11 22:18:29.936: INFO: Pod "var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90": Phase="Pending", Reason="", readiness=false. Elapsed: 3.657743ms May 11 22:18:31.993: INFO: Pod "var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.060930163s May 11 22:18:34.155: INFO: Pod "var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223099654s May 11 22:18:36.245: INFO: Pod "var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.312868102s May 11 22:18:38.249: INFO: Pod "var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.316825468s STEP: Saw pod success May 11 22:18:38.249: INFO: Pod "var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90" satisfied condition "success or failure" May 11 22:18:38.252: INFO: Trying to get logs from node jerma-worker pod var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90 container dapi-container: STEP: delete the pod May 11 22:18:38.754: INFO: Waiting for pod var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90 to disappear May 11 22:18:38.927: INFO: Pod var-expansion-64f855ec-3b62-4e58-b9fc-d7aef368ec90 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:18:38.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3955" for this suite. 
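The variable-expansion test above verifies that an env var's `value` may reference previously defined env vars with `$(VAR)` syntax. A minimal sketch (variable names and image are illustrative; only the container name `dapi-container` appears in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container           # container name from the log
    image: busybox:1.29            # image is an assumption
    command: ["sh", "-c", "env"]   # dump the environment so the composed value is observable
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"      # expanded from the earlier entries to "foo-value;;bar-value"
```

References to variables defined later in the list (or not at all) are left unexpanded as the literal `$(VAR)` string; `$$(VAR)` escapes the expansion.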
• [SLOW TEST:9.136 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:18:38.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 11 22:18:39.483: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-a f28d1f8f-a980-4aaf-8de0-7fd07d67d9bf 15374223 0 2020-05-11 22:18:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 22:18:39.483: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-a f28d1f8f-a980-4aaf-8de0-7fd07d67d9bf 15374223 0 2020-05-11 22:18:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 11 22:18:49.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-a f28d1f8f-a980-4aaf-8de0-7fd07d67d9bf 15374261 0 2020-05-11 22:18:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 11 22:18:49.832: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-a f28d1f8f-a980-4aaf-8de0-7fd07d67d9bf 15374261 0 2020-05-11 22:18:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 11 22:18:59.840: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-a f28d1f8f-a980-4aaf-8de0-7fd07d67d9bf 15374289 0 2020-05-11 22:18:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 22:18:59.840: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-a f28d1f8f-a980-4aaf-8de0-7fd07d67d9bf 15374289 0 2020-05-11 22:18:39 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 11 22:19:09.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-a f28d1f8f-a980-4aaf-8de0-7fd07d67d9bf 15374318 0 2020-05-11 22:18:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 11 22:19:09.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-a f28d1f8f-a980-4aaf-8de0-7fd07d67d9bf 15374318 0 2020-05-11 22:18:39 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 11 22:19:19.851: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-b 569955c4-b4b2-431b-b882-a561b45213f4 15374348 0 2020-05-11 22:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 22:19:19.851: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-b 569955c4-b4b2-431b-b882-a561b45213f4 15374348 0 2020-05-11 22:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 11 22:19:29.857: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8734 
/api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-b 569955c4-b4b2-431b-b882-a561b45213f4 15374378 0 2020-05-11 22:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 11 22:19:29.857: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8734 /api/v1/namespaces/watch-8734/configmaps/e2e-watch-test-configmap-b 569955c4-b4b2-431b-b882-a561b45213f4 15374378 0 2020-05-11 22:19:19 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:19:39.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8734" for this suite. • [SLOW TEST:60.958 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":271,"skipped":4429,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:19:39.892: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 22:19:39.965: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 11 22:19:42.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1478 create -f -' May 11 22:19:47.414: INFO: stderr: "" May 11 22:19:47.414: INFO: stdout: "e2e-test-crd-publish-openapi-4778-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 11 22:19:47.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1478 delete e2e-test-crd-publish-openapi-4778-crds test-foo' May 11 22:19:47.579: INFO: stderr: "" May 11 22:19:47.579: INFO: stdout: "e2e-test-crd-publish-openapi-4778-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 11 22:19:47.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1478 apply -f -' May 11 22:19:47.844: INFO: stderr: "" May 11 22:19:47.844: INFO: stdout: "e2e-test-crd-publish-openapi-4778-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 11 22:19:47.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1478 delete e2e-test-crd-publish-openapi-4778-crds test-foo' May 11 22:19:47.957: INFO: stderr: "" May 11 22:19:47.957: INFO: stdout: "e2e-test-crd-publish-openapi-4778-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 11 22:19:47.957: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1478 create -f -' May 11 22:19:48.199: INFO: rc: 1 May 11 22:19:48.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1478 apply -f -' May 11 22:19:48.491: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 11 22:19:48.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1478 create -f -' May 11 22:19:48.720: INFO: rc: 1 May 11 22:19:48.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1478 apply -f -' May 11 22:19:49.004: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 11 22:19:49.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4778-crds' May 11 22:19:49.295: INFO: stderr: "" May 11 22:19:49.295: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4778-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 11 22:19:49.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4778-crds.metadata' May 11 22:19:49.616: INFO: stderr: "" May 11 22:19:49.616: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4778-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 11 22:19:49.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4778-crds.spec' May 11 22:19:49.846: INFO: stderr: "" May 11 22:19:49.846: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4778-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 11 22:19:49.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4778-crds.spec.bars' May 11 22:19:50.997: INFO: stderr: "" May 11 22:19:50.997: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4778-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl 
explain works to return error when explain is called on property that doesn't exist May 11 22:19:50.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4778-crds.spec.bars2' May 11 22:19:51.506: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:19:54.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1478" for this suite. • [SLOW TEST:14.498 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":272,"skipped":4450,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:19:54.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 
a pod to test emptydir 0777 on node default medium May 11 22:19:54.929: INFO: Waiting up to 5m0s for pod "pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d" in namespace "emptydir-7984" to be "success or failure" May 11 22:19:54.998: INFO: Pod "pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 69.466539ms May 11 22:19:57.018: INFO: Pod "pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089114123s May 11 22:19:59.022: INFO: Pod "pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093371682s May 11 22:20:01.033: INFO: Pod "pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104107017s STEP: Saw pod success May 11 22:20:01.033: INFO: Pod "pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d" satisfied condition "success or failure" May 11 22:20:01.036: INFO: Trying to get logs from node jerma-worker pod pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d container test-container: STEP: delete the pod May 11 22:20:01.056: INFO: Waiting for pod pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d to disappear May 11 22:20:01.059: INFO: Pod pod-2b22b3b6-0e8c-435d-b71f-5c5fcfdedb6d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:20:01.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7984" for this suite. 
• [SLOW TEST:6.675 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4451,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:20:01.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 11 22:20:01.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041" in namespace "projected-1778" to be "success or failure" May 11 22:20:01.647: INFO: Pod "downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041": Phase="Pending", Reason="", readiness=false. 
Elapsed: 488.232759ms May 11 22:20:03.650: INFO: Pod "downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.491394487s May 11 22:20:05.653: INFO: Pod "downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041": Phase="Running", Reason="", readiness=true. Elapsed: 4.49508892s May 11 22:20:07.657: INFO: Pod "downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.498231382s STEP: Saw pod success May 11 22:20:07.657: INFO: Pod "downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041" satisfied condition "success or failure" May 11 22:20:07.659: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041 container client-container: STEP: delete the pod May 11 22:20:07.690: INFO: Waiting for pod downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041 to disappear May 11 22:20:07.695: INFO: Pod downwardapi-volume-0f8ce904-f528-4415-89a2-64e9da9e1041 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:20:07.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1778" for this suite. 
• [SLOW TEST:6.635 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4457,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:20:07.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 11 22:20:07.876: INFO: namespace kubectl-1281 May 11 22:20:07.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1281' May 11 22:20:08.270: INFO: stderr: "" May 11 22:20:08.270: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 11 22:20:09.273: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:20:09.273: INFO: Found 0 / 1 May 11 22:20:10.274: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:20:10.274: INFO: Found 0 / 1 May 11 22:20:11.276: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:20:11.276: INFO: Found 0 / 1 May 11 22:20:12.273: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:20:12.273: INFO: Found 1 / 1 May 11 22:20:12.273: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 11 22:20:12.274: INFO: Selector matched 1 pods for map[app:agnhost] May 11 22:20:12.274: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 11 22:20:12.274: INFO: wait on agnhost-master startup in kubectl-1281 May 11 22:20:12.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-fgfzg agnhost-master --namespace=kubectl-1281' May 11 22:20:12.425: INFO: stderr: "" May 11 22:20:12.425: INFO: stdout: "Paused\n" STEP: exposing RC May 11 22:20:12.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1281' May 11 22:20:12.577: INFO: stderr: "" May 11 22:20:12.577: INFO: stdout: "service/rm2 exposed\n" May 11 22:20:12.582: INFO: Service rm2 in namespace kubectl-1281 found. STEP: exposing service May 11 22:20:14.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1281' May 11 22:20:14.733: INFO: stderr: "" May 11 22:20:14.733: INFO: stdout: "service/rm3 exposed\n" May 11 22:20:14.743: INFO: Service rm3 in namespace kubectl-1281 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 11 22:20:16.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1281" for this suite. • [SLOW TEST:9.052 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":275,"skipped":4464,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 11 22:20:16.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 11 22:20:17.118: INFO: Creating deployment "test-recreate-deployment" May 11 22:20:17.870: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 11 22:20:18.057: INFO: deployment 
"test-recreate-deployment" doesn't have the required revision set May 11 22:20:20.204: INFO: Waiting deployment "test-recreate-deployment" to complete May 11 22:20:20.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832418, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832418, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832418, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832418, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:20:22.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832418, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832418, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832418, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724832418, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 11 22:20:24.276: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 11 22:20:24.281: INFO: Updating deployment test-recreate-deployment May 11 22:20:24.281: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 11 22:20:26.096: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1337 /apis/apps/v1/namespaces/deployment-1337/deployments/test-recreate-deployment 7f40c8c7-baee-4274-9f12-97b74ac63b2b 15374694 2 2020-05-11 22:20:17 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004931e78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-11 22:20:24 +0000 UTC,LastTransitionTime:2020-05-11 22:20:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-11 22:20:25 +0000 UTC,LastTransitionTime:2020-05-11 22:20:18 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}
May 11 22:20:26.100: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1337 /apis/apps/v1/namespaces/deployment-1337/replicasets/test-recreate-deployment-5f94c574ff 88574006-fc6e-4eb0-92d4-637626c006f8 15374690 1 2020-05-11 22:20:24 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 7f40c8c7-baee-4274-9f12-97b74ac63b2b 0xc00863bc07 0xc00863bc08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00863bc88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 11 22:20:26.100: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
May 11 22:20:26.100: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1337 /apis/apps/v1/namespaces/deployment-1337/replicasets/test-recreate-deployment-799c574856 1bc1efad-59a7-4a1d-ae23-f9780cf4e10f 15374681 2 2020-05-11 22:20:17 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 7f40c8c7-baee-4274-9f12-97b74ac63b2b 0xc00863bd07 0xc00863bd08}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00863bd78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 11 22:20:26.189: INFO: Pod "test-recreate-deployment-5f94c574ff-dftjg" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-dftjg test-recreate-deployment-5f94c574ff- deployment-1337 /api/v1/namespaces/deployment-1337/pods/test-recreate-deployment-5f94c574ff-dftjg da226f28-5df5-48d9-80b0-40202b64d113 15374696 0 2020-05-11 22:20:24 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 88574006-fc6e-4eb0-92d4-637626c006f8 0xc004d222f7 0xc004d222f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lws2j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lws2j,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lws2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},
LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:20:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 
22:20:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:20:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-11 22:20:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-11 22:20:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:20:26.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1337" for this suite.
• [SLOW TEST:10.237 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":276,"skipped":4491,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:20:26.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-59cb23c2-3ff8-4855-9229-3b9a4069d474
STEP: Creating a pod to test consume configMaps
May 11 22:20:27.470: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2" in namespace "projected-2054" to be "success or failure"
May 11 22:20:27.492: INFO: Pod "pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.272554ms
May 11 22:20:29.671: INFO: Pod "pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.201012962s
May 11 22:20:31.690: INFO: Pod "pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.219290726s
May 11 22:20:33.743: INFO: Pod "pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.27269559s
STEP: Saw pod success
May 11 22:20:33.743: INFO: Pod "pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2" satisfied condition "success or failure"
May 11 22:20:33.816: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2 container projected-configmap-volume-test:
STEP: delete the pod
May 11 22:20:34.033: INFO: Waiting for pod pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2 to disappear
May 11 22:20:34.204: INFO: Pod pod-projected-configmaps-5ea79b1c-2c6b-4e34-ac14-e988e66e0bc2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:20:34.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2054" for this suite.
• [SLOW TEST:7.274 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 11 22:20:34.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 11 22:20:35.200: INFO: Pod name wrapped-volume-race-4f24ae5f-8a98-4164-a6ca-8cfa52a5598b: Found 0 pods out of 5
May 11 22:20:40.207: INFO: Pod name wrapped-volume-race-4f24ae5f-8a98-4164-a6ca-8cfa52a5598b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4f24ae5f-8a98-4164-a6ca-8cfa52a5598b in namespace emptydir-wrapper-1918, will wait for the garbage collector to delete the pods
May 11 22:20:56.316: INFO: Deleting ReplicationController wrapped-volume-race-4f24ae5f-8a98-4164-a6ca-8cfa52a5598b took: 5.790276ms
May 11 22:20:56.417: INFO: Terminating ReplicationController wrapped-volume-race-4f24ae5f-8a98-4164-a6ca-8cfa52a5598b pods took: 100.192749ms
STEP: Creating RC which spawns configmap-volume pods
May 11 22:21:10.022: INFO: Pod name wrapped-volume-race-1e6f0480-87ac-4c8d-b059-96b94823e5f8: Found 0 pods out of 5
May 11 22:21:15.027: INFO: Pod name wrapped-volume-race-1e6f0480-87ac-4c8d-b059-96b94823e5f8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1e6f0480-87ac-4c8d-b059-96b94823e5f8 in namespace emptydir-wrapper-1918, will wait for the garbage collector to delete the pods
May 11 22:21:33.834: INFO: Deleting ReplicationController wrapped-volume-race-1e6f0480-87ac-4c8d-b059-96b94823e5f8 took: 61.118602ms
May 11 22:21:34.534: INFO: Terminating ReplicationController wrapped-volume-race-1e6f0480-87ac-4c8d-b059-96b94823e5f8 pods took: 700.298179ms
STEP: Creating RC which spawns configmap-volume pods
May 11 22:21:50.510: INFO: Pod name wrapped-volume-race-07775279-8b6c-49db-b38a-455a999dc550: Found 0 pods out of 5
May 11 22:21:55.515: INFO: Pod name wrapped-volume-race-07775279-8b6c-49db-b38a-455a999dc550: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-07775279-8b6c-49db-b38a-455a999dc550 in namespace emptydir-wrapper-1918, will wait for the garbage collector to delete the pods
May 11 22:22:09.727: INFO: Deleting ReplicationController wrapped-volume-race-07775279-8b6c-49db-b38a-455a999dc550 took: 34.009505ms
May 11 22:22:10.127: INFO: Terminating ReplicationController wrapped-volume-race-07775279-8b6c-49db-b38a-455a999dc550 pods took: 400.243077ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 11 22:22:21.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1918" for this suite.
• [SLOW TEST:107.528 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":278,"skipped":4539,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
May 11 22:22:21.794: INFO: Running AfterSuite actions on all nodes
May 11 22:22:21.794: INFO: Running AfterSuite actions on node 1
May 11 22:22:21.794: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}
Ran 278 of 4842 Specs in 6011.348 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS