I0525 21:09:22.269304 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready I0525 21:09:22.269556 6 e2e.go:109] Starting e2e run "9156ce2b-e9d1-4e68-8913-003914938851" on Ginkgo node 1 {"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0} Running Suite: Kubernetes e2e suite =================================== Random Seed: 1590440961 - Will randomize all specs Will run 278 of 4842 specs May 25 21:09:22.327: INFO: >>> kubeConfig: /root/.kube/config May 25 21:09:22.331: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable May 25 21:09:22.352: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready May 25 21:09:22.387: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed) May 25 21:09:22.387: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. May 25 21:09:22.387: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start May 25 21:09:22.397: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) May 25 21:09:22.397: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) May 25 21:09:22.397: INFO: e2e test version: v1.17.4 May 25 21:09:22.398: INFO: kube-apiserver version: v1.17.2 May 25 21:09:22.398: INFO: >>> kubeConfig: /root/.kube/config May 25 21:09:22.403: INFO: Cluster IP family: ipv4 SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:09:22.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath May 25 21:09:22.488: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-bjll STEP: Creating a pod to test atomic-volume-subpath May 25 21:09:22.575: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-bjll" in namespace "subpath-120" to be "success or failure" May 25 21:09:22.584: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Pending", Reason="", readiness=false. Elapsed: 8.424889ms May 25 21:09:24.664: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088599831s May 25 21:09:26.668: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 4.09275238s May 25 21:09:28.672: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 6.096828212s May 25 21:09:30.675: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 8.099933501s May 25 21:09:32.680: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.104551283s May 25 21:09:34.683: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 12.108140729s May 25 21:09:36.687: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 14.111610869s May 25 21:09:38.711: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 16.135921179s May 25 21:09:40.716: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 18.140396125s May 25 21:09:42.721: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 20.145438701s May 25 21:09:44.725: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Running", Reason="", readiness=true. Elapsed: 22.15009884s May 25 21:09:46.730: INFO: Pod "pod-subpath-test-secret-bjll": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.154524668s STEP: Saw pod success May 25 21:09:46.730: INFO: Pod "pod-subpath-test-secret-bjll" satisfied condition "success or failure" May 25 21:09:46.733: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-bjll container test-container-subpath-secret-bjll: STEP: delete the pod May 25 21:09:46.809: INFO: Waiting for pod pod-subpath-test-secret-bjll to disappear May 25 21:09:46.813: INFO: Pod pod-subpath-test-secret-bjll no longer exists STEP: Deleting pod pod-subpath-test-secret-bjll May 25 21:09:46.813: INFO: Deleting pod "pod-subpath-test-secret-bjll" in namespace "subpath-120" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:09:46.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-120" for this suite. 
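The subpath spec above mounts a single key of a Secret at a file path inside the container. A minimal pod exercising the same secret-plus-subPath mount might look like the sketch below; the secret name, key, paths, and image are illustrative stand-ins, not the objects the framework generated:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-secret
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: busybox:1.29            # illustrative; the suite ships its own test images
    command: ["cat", "/test-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /test-volume/data-1
      subPath: data-1              # mount only this key of the secret
  volumes:
  - name: secret-volume
    secret:
      secretName: example-secret   # assumed name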
• [SLOW TEST:24.418 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":1,"skipped":19,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:09:46.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2274/configmap-test-c03ad370-1c20-47e5-b5a5-2a5de6e0a8cd STEP: Creating a pod to test consume configMaps May 25 21:09:46.904: INFO: Waiting up to 5m0s for pod "pod-configmaps-40a5a644-eb89-4d6d-8f93-63b4021bf36a" in namespace "configmap-2274" to be "success or failure" May 25 21:09:46.956: INFO: Pod "pod-configmaps-40a5a644-eb89-4d6d-8f93-63b4021bf36a": Phase="Pending", Reason="", readiness=false. Elapsed: 52.10631ms May 25 21:09:48.960: INFO: Pod "pod-configmaps-40a5a644-eb89-4d6d-8f93-63b4021bf36a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056184173s May 25 21:09:50.968: INFO: Pod "pod-configmaps-40a5a644-eb89-4d6d-8f93-63b4021bf36a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064160821s STEP: Saw pod success May 25 21:09:50.968: INFO: Pod "pod-configmaps-40a5a644-eb89-4d6d-8f93-63b4021bf36a" satisfied condition "success or failure" May 25 21:09:50.972: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-40a5a644-eb89-4d6d-8f93-63b4021bf36a container env-test: STEP: delete the pod May 25 21:09:51.013: INFO: Waiting for pod pod-configmaps-40a5a644-eb89-4d6d-8f93-63b4021bf36a to disappear May 25 21:09:51.028: INFO: Pod pod-configmaps-40a5a644-eb89-4d6d-8f93-63b4021bf36a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:09:51.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2274" for this suite. 
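The ConfigMap test above injects a key into the container environment rather than mounting it as a volume. A minimal equivalent, with illustrative names and values, could be:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test             # assumed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-env
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]   # the test asserts on the printed environment
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1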
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":29,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:09:51.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:09:51.133: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad" in namespace "downward-api-3742" to be "success or failure" May 25 21:09:51.137: INFO: Pod "downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.281363ms May 25 21:09:53.172: INFO: Pod "downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039390431s May 25 21:09:55.176: INFO: Pod "downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad": Phase="Running", Reason="", readiness=true. Elapsed: 4.043660837s May 25 21:09:57.181: INFO: Pod "downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048643781s STEP: Saw pod success May 25 21:09:57.181: INFO: Pod "downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad" satisfied condition "success or failure" May 25 21:09:57.185: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad container client-container: STEP: delete the pod May 25 21:09:57.221: INFO: Waiting for pod downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad to disappear May 25 21:09:57.232: INFO: Pod downwardapi-volume-6aea889b-47c1-46ac-9324-332c7c0d65ad no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:09:57.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3742" for this suite. 
• [SLOW TEST:6.247 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":43,"failed":0} [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:09:57.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-83bf1ef3-ad8b-4433-b9df-0f6992c032e4 STEP: Creating a pod to test consume configMaps May 25 21:09:57.432: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4a34d424-8f4f-40dd-ab02-60caf845a492" in namespace "projected-2083" to be "success or failure" May 25 21:09:57.441: INFO: Pod "pod-projected-configmaps-4a34d424-8f4f-40dd-ab02-60caf845a492": Phase="Pending", Reason="", readiness=false. Elapsed: 8.817919ms May 25 21:09:59.445: INFO: Pod "pod-projected-configmaps-4a34d424-8f4f-40dd-ab02-60caf845a492": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012967867s May 25 21:10:01.466: INFO: Pod "pod-projected-configmaps-4a34d424-8f4f-40dd-ab02-60caf845a492": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033484521s STEP: Saw pod success May 25 21:10:01.466: INFO: Pod "pod-projected-configmaps-4a34d424-8f4f-40dd-ab02-60caf845a492" satisfied condition "success or failure" May 25 21:10:01.468: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-4a34d424-8f4f-40dd-ab02-60caf845a492 container projected-configmap-volume-test: STEP: delete the pod May 25 21:10:01.508: INFO: Waiting for pod pod-projected-configmaps-4a34d424-8f4f-40dd-ab02-60caf845a492 to disappear May 25 21:10:01.512: INFO: Pod pod-projected-configmaps-4a34d424-8f4f-40dd-ab02-60caf845a492 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:10:01.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2083" for this suite. 
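The projected ConfigMap test above remaps a key to a new path and sets a per-item file mode. Roughly, with illustrative names in place of the generated ones:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test   # assumed name
          items:
          - key: data-1
            path: path/to/data-2
            mode: 0400                     # the per-item mode the spec title refers to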
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":43,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:10:01.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 25 21:10:01.628: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:10:09.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1087" for this suite. • [SLOW TEST:7.844 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":5,"skipped":72,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:10:09.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 25 21:10:09.419: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 21:10:09.442: INFO: Waiting for terminating namespaces to be deleted... 
May 25 21:10:09.445: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 25 21:10:09.450: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 21:10:09.450: INFO: Container kindnet-cni ready: true, restart count 0 May 25 21:10:09.451: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 21:10:09.451: INFO: Container kube-proxy ready: true, restart count 0 May 25 21:10:09.451: INFO: pod-init-fe946636-c2bd-4618-bfac-478c588c628d from init-container-1087 started at 2020-05-25 21:10:01 +0000 UTC (1 container statuses recorded) May 25 21:10:09.451: INFO: Container run1 ready: true, restart count 0 May 25 21:10:09.451: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 25 21:10:09.457: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 21:10:09.457: INFO: Container kindnet-cni ready: true, restart count 0 May 25 21:10:09.457: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 25 21:10:09.457: INFO: Container kube-bench ready: false, restart count 0 May 25 21:10:09.457: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 21:10:09.457: INFO: Container kube-proxy ready: true, restart count 0 May 25 21:10:09.457: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 25 21:10:09.457: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5329e66c-e6ca-4b43-be45-5c91b21bc2e2 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-5329e66c-e6ca-4b43-be45-5c91b21bc2e2 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-5329e66c-e6ca-4b43-be45-5c91b21bc2e2 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:10:17.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1115" for this suite. 
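The predicate test above applies a random label (value "42") to a node and then relaunches the pod with a matching nodeSelector. The relevant pod shape, with an illustrative label key standing in for the generated kubernetes.io/e2e-... one:

apiVersion: v1
kind: Pod
metadata:
  name: with-labels
spec:
  nodeSelector:
    kubernetes.io/e2e-example: "42"   # assumed key; the suite generates a random one
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1       # illustrative image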
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.254 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":6,"skipped":108,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:10:17.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-n8ft STEP: Creating a pod to test atomic-volume-subpath May 25 21:10:17.689: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-n8ft" in namespace "subpath-7917" to be "success or failure" May 25 21:10:17.693: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 3.932738ms May 25 21:10:19.697: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007209474s May 25 21:10:21.701: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 4.011856772s May 25 21:10:23.704: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 6.014307285s May 25 21:10:25.708: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 8.018624483s May 25 21:10:27.712: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 10.022528455s May 25 21:10:29.716: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 12.026229161s May 25 21:10:31.720: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 14.030703988s May 25 21:10:33.724: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 16.034488115s May 25 21:10:35.728: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 18.038627802s May 25 21:10:37.732: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. Elapsed: 20.042667581s May 25 21:10:39.736: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.04708035s May 25 21:10:41.741: INFO: Pod "pod-subpath-test-configmap-n8ft": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.051565244s STEP: Saw pod success May 25 21:10:41.741: INFO: Pod "pod-subpath-test-configmap-n8ft" satisfied condition "success or failure" May 25 21:10:41.744: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-n8ft container test-container-subpath-configmap-n8ft: STEP: delete the pod May 25 21:10:41.761: INFO: Waiting for pod pod-subpath-test-configmap-n8ft to disappear May 25 21:10:41.765: INFO: Pod pod-subpath-test-configmap-n8ft no longer exists STEP: Deleting pod pod-subpath-test-configmap-n8ft May 25 21:10:41.765: INFO: Deleting pod "pod-subpath-test-configmap-n8ft" in namespace "subpath-7917" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:10:41.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7917" for this suite. • [SLOW TEST:24.154 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":7,"skipped":139,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:10:41.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:10:41.814: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 25 21:10:44.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6557 create -f -' May 25 21:10:47.983: INFO: stderr: "" May 25 21:10:47.983: INFO: stdout: "e2e-test-crd-publish-openapi-3265-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 25 21:10:47.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6557 delete e2e-test-crd-publish-openapi-3265-crds test-cr' May 25 21:10:48.089: INFO: stderr: "" May 25 21:10:48.089: INFO: stdout: "e2e-test-crd-publish-openapi-3265-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 25 21:10:48.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-6557 apply -f -' May 25 21:10:48.344: INFO: stderr: "" May 25 21:10:48.344: INFO: stdout: "e2e-test-crd-publish-openapi-3265-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 25 21:10:48.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6557 delete e2e-test-crd-publish-openapi-3265-crds test-cr' May 25 21:10:48.459: INFO: stderr: "" May 25 21:10:48.459: INFO: stdout: "e2e-test-crd-publish-openapi-3265-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 25 21:10:48.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3265-crds' May 25 21:10:48.710: INFO: stderr: "" May 25 21:10:48.710: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3265-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:10:50.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6557" for this suite. • [SLOW TEST:8.822 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":8,"skipped":146,"failed":0} SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:10:50.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-2263 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2263 to expose endpoints map[] May 25 21:10:50.838: INFO: Get endpoints failed (15.880078ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 25 21:10:51.843: INFO: successfully validated that service multi-endpoint-test in namespace services-2263 exposes endpoints map[] (1.020525027s elapsed) STEP: Creating pod pod1 in namespace services-2263 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2263 to expose endpoints map[pod1:[100]] May 25 21:10:55.988: INFO: successfully validated that service multi-endpoint-test in namespace services-2263 exposes endpoints
map[pod1:[100]] (4.137962768s elapsed) STEP: Creating pod pod2 in namespace services-2263 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2263 to expose endpoints map[pod1:[100] pod2:[101]] May 25 21:11:00.119: INFO: successfully validated that service multi-endpoint-test in namespace services-2263 exposes endpoints map[pod1:[100] pod2:[101]] (4.126961359s elapsed) STEP: Deleting pod pod1 in namespace services-2263 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2263 to expose endpoints map[pod2:[101]] May 25 21:11:01.169: INFO: successfully validated that service multi-endpoint-test in namespace services-2263 exposes endpoints map[pod2:[101]] (1.045470495s elapsed) STEP: Deleting pod pod2 in namespace services-2263 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2263 to expose endpoints map[] May 25 21:11:02.196: INFO: successfully validated that service multi-endpoint-test in namespace services-2263 exposes endpoints map[] (1.019634578s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:02.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2263" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.722 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":9,"skipped":153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:02.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 25 21:11:02.403: INFO: Pod name pod-release: Found 0 pods out of 1 May 25 21:11:07.412: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:08.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7371" for this suite. 
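The ReplicationController test above changes a matched pod's label so the controller releases it and creates a replacement. A controller of that shape, with illustrative names, might be:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release        # relabeling a pod away from this selector releases it
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: k8s.gcr.io/pause:3.1   # illustrative image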
• [SLOW TEST:6.153 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":10,"skipped":177,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:08.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:11:08.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3907' May 25 21:11:08.951: INFO: stderr: "" May 25 21:11:08.951: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 25 21:11:08.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3907' May 25 21:11:10.097: INFO: stderr: "" May 25 21:11:10.097: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 25 21:11:11.102: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:11:11.102: INFO: Found 0 / 1 May 25 21:11:12.101: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:11:12.101: INFO: Found 0 / 1 May 25 21:11:13.102: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:11:13.102: INFO: Found 1 / 1 May 25 21:11:13.102: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 25 21:11:13.106: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:11:13.106: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 25 21:11:13.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-mc8xx --namespace=kubectl-3907' May 25 21:11:13.227: INFO: stderr: "" May 25 21:11:13.227: INFO: stdout: "Name: agnhost-master-mc8xx\nNamespace: kubectl-3907\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Mon, 25 May 2020 21:11:09 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.79\nIPs:\n IP: 10.244.2.79\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://5be84788a9c071e94e8f169817473f87b1ba0dca409c610da719ec48141aa1b6\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 25 May 2020 21:11:11 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-g7q6b (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-g7q6b:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-g7q6b\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled <unknown> default-scheduler Successfully assigned kubectl-3907/agnhost-master-mc8xx to jerma-worker2\n Normal Pulled 3s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker2 Started container agnhost-master\n" May 25 21:11:13.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3907' May 25 21:11:13.351: INFO: stderr: "" May 25 21:11:13.351: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3907\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-mc8xx\n" May 25 21:11:13.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3907' May 25 21:11:13.470: INFO: stderr: "" May 25 21:11:13.470: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-3907\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.89.215\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.79:6379\nSession Affinity: None\nEvents: <none>\n" May 25 21:11:13.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 25 21:11:13.623: INFO: stderr: "" May 25 21:11:13.623: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Mon, 25 May 2020 21:11:10 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 25 May 2020 21:08:26 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 25 May 2020 21:08:26 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 25 May 2020 21:08:26 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 25 May 2020 21:08:26 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 71d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 71d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 71d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 71d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 71d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 71d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 71d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 71d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 71d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" May 25 21:11:13.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3907' May 25 21:11:14.105: INFO: stderr: "" May 25 21:11:14.105: INFO: stdout: "Name: kubectl-3907\nLabels: e2e-framework=kubectl\n e2e-run=9156ce2b-e9d1-4e68-8913-003914938851\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:14.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3907" for this suite. • [SLOW TEST:5.638 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":11,"skipped":194,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:14.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 25 21:11:14.392: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 25 21:11:25.927: INFO: >>> kubeConfig: /root/.kube/config May 25 21:11:28.898: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:38.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1837" for this suite. 
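The OpenAPI publishing test above registers one CRD that serves two versions of the same group. A minimal multi-version CRD of that shape (group, kind, and the permissive schema are illustrative, not what the suite generated):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true              # exactly one version must be the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true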
• [SLOW TEST:24.294 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":12,"skipped":221,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:38.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-055940a0-1d74-4712-8f70-32fbbbd83a35 STEP: Creating a pod to test consume secrets May 25 21:11:38.539: INFO: Waiting up to 5m0s for pod "pod-secrets-5dbc1a96-879b-4fba-8603-f7884bf0924d" in namespace "secrets-7453" to be "success or failure" May 25 21:11:38.546: INFO: Pod "pod-secrets-5dbc1a96-879b-4fba-8603-f7884bf0924d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.519821ms May 25 21:11:40.550: INFO: Pod "pod-secrets-5dbc1a96-879b-4fba-8603-f7884bf0924d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01049947s May 25 21:11:42.555: INFO: Pod "pod-secrets-5dbc1a96-879b-4fba-8603-f7884bf0924d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015603422s STEP: Saw pod success May 25 21:11:42.555: INFO: Pod "pod-secrets-5dbc1a96-879b-4fba-8603-f7884bf0924d" satisfied condition "success or failure" May 25 21:11:42.558: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-5dbc1a96-879b-4fba-8603-f7884bf0924d container secret-volume-test: STEP: delete the pod May 25 21:11:42.596: INFO: Waiting for pod pod-secrets-5dbc1a96-879b-4fba-8603-f7884bf0924d to disappear May 25 21:11:42.721: INFO: Pod pod-secrets-5dbc1a96-879b-4fba-8603-f7884bf0924d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:42.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7453" for this suite. 
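The Secrets test above mounts a whole Secret as a volume and reads a key back from the filesystem. A minimal sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-volume
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # assumed name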
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:42.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 25 21:11:47.587: INFO: Successfully updated pod "pod-update-aa7fcd02-0e2e-47a3-bb53-e82309c1cce8" STEP: verifying the updated pod is in kubernetes May 25 21:11:47.592: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:47.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5857" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:47.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 25 21:11:47.693: INFO: Waiting up to 5m0s for pod "client-containers-160f741e-b108-4af6-9157-3b1b5bb4d3dd" in namespace "containers-2841" to be "success or failure" May 25 21:11:47.696: INFO: Pod "client-containers-160f741e-b108-4af6-9157-3b1b5bb4d3dd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488056ms May 25 21:11:49.701: INFO: Pod "client-containers-160f741e-b108-4af6-9157-3b1b5bb4d3dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00817689s May 25 21:11:51.706: INFO: Pod "client-containers-160f741e-b108-4af6-9157-3b1b5bb4d3dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012867061s STEP: Saw pod success May 25 21:11:51.706: INFO: Pod "client-containers-160f741e-b108-4af6-9157-3b1b5bb4d3dd" satisfied condition "success or failure" May 25 21:11:51.709: INFO: Trying to get logs from node jerma-worker pod client-containers-160f741e-b108-4af6-9157-3b1b5bb4d3dd container test-container: STEP: delete the pod May 25 21:11:51.746: INFO: Waiting for pod client-containers-160f741e-b108-4af6-9157-3b1b5bb4d3dd to disappear May 25 21:11:51.750: INFO: Pod client-containers-160f741e-b108-4af6-9157-3b1b5bb4d3dd no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:51.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2841" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":310,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:51.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:55.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5086" for this suite. 
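The EmptyDir wrapper test above (note its cleanup steps for a secret, a configmap, and a pod) checks that a Secret volume and a ConfigMap volume can coexist in one pod without conflicting. Roughly the pod shape being exercised, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-and-configmaps
spec:
  containers:
  - name: wrapper-test
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
    - name: configmap-vol
      mountPath: /etc/configmap-volume
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-secret      # assumed name
  - name: configmap-vol
    configMap:
      name: wrapper-configmap         # assumed name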
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":16,"skipped":314,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:55.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:11:56.218: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 25 21:11:58.368: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:11:58.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8463" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":17,"skipped":325,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:11:58.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-e1512c73-f89d-48d1-8629-aa9c9daf0dd1 STEP: Creating a pod to test consume configMaps May 25 21:11:58.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-46451278-f6dd-4276-83a7-a21ef4f33710" in namespace "configmap-9700" to be "success or failure" May 25 21:11:58.655: INFO: Pod "pod-configmaps-46451278-f6dd-4276-83a7-a21ef4f33710": Phase="Pending", Reason="", readiness=false. Elapsed: 37.660497ms May 25 21:12:00.659: INFO: Pod "pod-configmaps-46451278-f6dd-4276-83a7-a21ef4f33710": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041915459s May 25 21:12:02.678: INFO: Pod "pod-configmaps-46451278-f6dd-4276-83a7-a21ef4f33710": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.060871284s STEP: Saw pod success May 25 21:12:02.678: INFO: Pod "pod-configmaps-46451278-f6dd-4276-83a7-a21ef4f33710" satisfied condition "success or failure" May 25 21:12:02.680: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-46451278-f6dd-4276-83a7-a21ef4f33710 container configmap-volume-test: STEP: delete the pod May 25 21:12:02.716: INFO: Waiting for pod pod-configmaps-46451278-f6dd-4276-83a7-a21ef4f33710 to disappear May 25 21:12:02.720: INFO: Pod pod-configmaps-46451278-f6dd-4276-83a7-a21ef4f33710 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:12:02.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9700" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":331,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:12:02.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-4401c361-c025-4267-8e2c-cebf4918fb82 in namespace container-probe-119 May 25 21:12:06.797: INFO: Started pod busybox-4401c361-c025-4267-8e2c-cebf4918fb82 in namespace container-probe-119 STEP: checking the pod's current state and verifying that restartCount is present May 25 21:12:06.801: INFO: Initial restart count of pod busybox-4401c361-c025-4267-8e2c-cebf4918fb82 is 0 May 25 21:12:58.980: INFO: Restart count of pod container-probe-119/busybox-4401c361-c025-4267-8e2c-cebf4918fb82 is now 1 (52.179091503s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:12:58.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-119" for this suite. 
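------------------------------
The restart observed above (count 0 -> 1 after ~52s) is the kubelet acting on a failing exec probe: the container creates /tmp/health, removes it a few seconds later, and from then on "cat /tmp/health" exits non-zero, so the container is killed and restarted. A minimal sketch of a pod that behaves this way, assuming the usual busybox pattern (name, timings, and probe intervals are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo             # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox
    # healthy for ~10s, then the probe's target file disappears
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      periodSeconds: 5
------------------------------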
• [SLOW TEST:56.315 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":383,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:12:59.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 25 21:12:59.095: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6786" to be "success or failure" May 25 21:12:59.099: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.783812ms May 25 21:13:01.103: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008075027s May 25 21:13:03.107: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01174289s May 25 21:13:05.111: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016043436s STEP: Saw pod success May 25 21:13:05.111: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 25 21:13:05.114: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 25 21:13:05.309: INFO: Waiting for pod pod-host-path-test to disappear May 25 21:13:05.319: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:13:05.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6786" for this suite. • [SLOW TEST:6.283 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":406,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:13:05.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:13:05.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1336" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":21,"skipped":444,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:13:05.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-4389/secret-test-0d432923-e9e0-48d0-9dd2-22787fff6094 STEP: Creating a pod to test consume secrets May 25 21:13:05.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-fea21139-a5db-4872-83df-8e5cf2bb3898" in namespace "secrets-4389" to be "success or failure" May 25 21:13:05.816: INFO: Pod "pod-configmaps-fea21139-a5db-4872-83df-8e5cf2bb3898": Phase="Pending", Reason="", readiness=false. Elapsed: 3.901895ms May 25 21:13:07.822: INFO: Pod "pod-configmaps-fea21139-a5db-4872-83df-8e5cf2bb3898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009094072s May 25 21:13:09.930: INFO: Pod "pod-configmaps-fea21139-a5db-4872-83df-8e5cf2bb3898": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.117847603s STEP: Saw pod success May 25 21:13:09.930: INFO: Pod "pod-configmaps-fea21139-a5db-4872-83df-8e5cf2bb3898" satisfied condition "success or failure" May 25 21:13:09.934: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fea21139-a5db-4872-83df-8e5cf2bb3898 container env-test: STEP: delete the pod May 25 21:13:10.055: INFO: Waiting for pod pod-configmaps-fea21139-a5db-4872-83df-8e5cf2bb3898 to disappear May 25 21:13:10.058: INFO: Pod pod-configmaps-fea21139-a5db-4872-83df-8e5cf2bb3898 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:13:10.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4389" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":449,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:13:10.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:13:10.149: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d2d923c-9709-46ca-a27a-2a8aa484a4e6" in namespace "projected-2568" to be "success or failure" May 25 21:13:10.193: INFO: Pod "downwardapi-volume-9d2d923c-9709-46ca-a27a-2a8aa484a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 44.607571ms May 25 21:13:12.259: INFO: Pod "downwardapi-volume-9d2d923c-9709-46ca-a27a-2a8aa484a4e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11022339s May 25 21:13:14.298: INFO: Pod "downwardapi-volume-9d2d923c-9709-46ca-a27a-2a8aa484a4e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.149282049s STEP: Saw pod success May 25 21:13:14.298: INFO: Pod "downwardapi-volume-9d2d923c-9709-46ca-a27a-2a8aa484a4e6" satisfied condition "success or failure" May 25 21:13:14.300: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9d2d923c-9709-46ca-a27a-2a8aa484a4e6 container client-container: STEP: delete the pod May 25 21:13:14.376: INFO: Waiting for pod downwardapi-volume-9d2d923c-9709-46ca-a27a-2a8aa484a4e6 to disappear May 25 21:13:14.381: INFO: Pod downwardapi-volume-9d2d923c-9709-46ca-a27a-2a8aa484a4e6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:13:14.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2568" for this suite. 
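------------------------------
What the DefaultMode spec above exercises: a projected downwardAPI volume is created with a restrictive defaultMode, and the client-container reads back the permission bits of the projected file to confirm they match. A minimal sketch, assuming 0400 as the mode under test (name, image, and exact mode are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # prints the octal mode of the projected file, e.g. "400"
    command: ["/bin/sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400                # applies to every file in the volume
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
------------------------------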
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":449,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:13:14.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 25 21:13:14.436: INFO: Waiting up to 5m0s for pod "var-expansion-eb2e5818-b158-440c-9000-ed8efdd8ae90" in namespace "var-expansion-3282" to be "success or failure" May 25 21:13:14.454: INFO: Pod "var-expansion-eb2e5818-b158-440c-9000-ed8efdd8ae90": Phase="Pending", Reason="", readiness=false. Elapsed: 17.945207ms May 25 21:13:16.535: INFO: Pod "var-expansion-eb2e5818-b158-440c-9000-ed8efdd8ae90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098313322s May 25 21:13:18.540: INFO: Pod "var-expansion-eb2e5818-b158-440c-9000-ed8efdd8ae90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103173002s STEP: Saw pod success May 25 21:13:18.540: INFO: Pod "var-expansion-eb2e5818-b158-440c-9000-ed8efdd8ae90" satisfied condition "success or failure" May 25 21:13:18.543: INFO: Trying to get logs from node jerma-worker pod var-expansion-eb2e5818-b158-440c-9000-ed8efdd8ae90 container dapi-container: STEP: delete the pod May 25 21:13:18.582: INFO: Waiting for pod var-expansion-eb2e5818-b158-440c-9000-ed8efdd8ae90 to disappear May 25 21:13:18.585: INFO: Pod var-expansion-eb2e5818-b158-440c-9000-ed8efdd8ae90 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:13:18.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3282" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:13:18.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:13:18.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9893535-5c8e-4eca-87e5-87f8111a8cf9" in namespace "projected-3676" to be "success or failure" May 25 21:13:18.662: INFO: Pod "downwardapi-volume-d9893535-5c8e-4eca-87e5-87f8111a8cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.037286ms May 25 21:13:20.666: INFO: Pod "downwardapi-volume-d9893535-5c8e-4eca-87e5-87f8111a8cf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006593277s May 25 21:13:22.672: INFO: Pod "downwardapi-volume-d9893535-5c8e-4eca-87e5-87f8111a8cf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012683454s STEP: Saw pod success May 25 21:13:22.672: INFO: Pod "downwardapi-volume-d9893535-5c8e-4eca-87e5-87f8111a8cf9" satisfied condition "success or failure" May 25 21:13:22.675: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-d9893535-5c8e-4eca-87e5-87f8111a8cf9 container client-container: STEP: delete the pod May 25 21:13:22.800: INFO: Waiting for pod downwardapi-volume-d9893535-5c8e-4eca-87e5-87f8111a8cf9 to disappear May 25 21:13:22.823: INFO: Pod downwardapi-volume-d9893535-5c8e-4eca-87e5-87f8111a8cf9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:13:22.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3676" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":533,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:13:22.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-65897f69-969c-4487-8291-f1ba7e4fa231 in namespace container-probe-7458 May 25 21:13:27.020: INFO: Started pod test-webserver-65897f69-969c-4487-8291-f1ba7e4fa231 in namespace container-probe-7458 STEP: checking the pod's current state and verifying that restartCount is present May 25 21:13:27.023: INFO: Initial restart count of pod test-webserver-65897f69-969c-4487-8291-f1ba7e4fa231 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:17:27.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7458" for this suite. 
• [SLOW TEST:244.807 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":546,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:17:27.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-fvxcc in namespace proxy-3715 I0525 21:17:28.144218 6 runners.go:189] Created replication controller with name: proxy-service-fvxcc, namespace: proxy-3715, replica count: 1 I0525 21:17:29.194759 6 runners.go:189] proxy-service-fvxcc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 21:17:30.194993 6 runners.go:189] proxy-service-fvxcc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 21:17:31.195231 6 runners.go:189] proxy-service-fvxcc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 21:17:32.195479 6 runners.go:189] proxy-service-fvxcc Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0525 21:17:33.195769 6 runners.go:189] proxy-service-fvxcc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 21:17:33.199: INFO: setup took 5.293226064s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 25 21:17:33.208: INFO: (0) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 8.777089ms) May 25 21:17:33.208: INFO: (0) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 8.961343ms) May 25 21:17:33.208: INFO: (0) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... (200; 8.979946ms) May 25 21:17:33.208: INFO: (0) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... 
(200; 8.962024ms) May 25 21:17:33.208: INFO: (0) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 9.152523ms) May 25 21:17:33.208: INFO: (0) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 9.343337ms) May 25 21:17:33.209: INFO: (0) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 9.396416ms) May 25 21:17:33.210: INFO: (0) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 11.253956ms) May 25 21:17:33.210: INFO: (0) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 11.386677ms) May 25 21:17:33.211: INFO: (0) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 11.558619ms) May 25 21:17:33.219: INFO: (0) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 20.282371ms) May 25 21:17:33.261: INFO: (0) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 61.843109ms) May 25 21:17:33.302: INFO: (0) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 103.102752ms) May 25 21:17:33.306: INFO: (0) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: ... (200; 4.736442ms) May 25 21:17:33.315: INFO: (1) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 4.909445ms) May 25 21:17:33.315: INFO: (1) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 5.258427ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... (200; 5.464706ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 5.602672ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 5.885765ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 5.857524ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 5.894957ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 5.844244ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 6.25414ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 6.234551ms) May 25 21:17:33.316: INFO: (1) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test (200; 3.9298ms) May 25 21:17:33.321: INFO: (2) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.365443ms) May 25 21:17:33.321: INFO: (2) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... 
(200; 4.588435ms) May 25 21:17:33.321: INFO: (2) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.655912ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 4.953254ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 4.975693ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 4.982018ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 5.188919ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 5.617992ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 5.836856ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 5.810391ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 5.816479ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 5.892325ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 5.91933ms) May 25 21:17:33.322: INFO: (2) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 5.724204ms) May 25 21:17:33.323: INFO: (2) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... (200; 2.686867ms) May 25 21:17:33.326: INFO: (3) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 3.310235ms) May 25 21:17:33.326: INFO: (3) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.33111ms) May 25 21:17:33.326: INFO: (3) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.284779ms) May 25 21:17:33.326: INFO: (3) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 3.380593ms) May 25 21:17:33.326: INFO: (3) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 3.405006ms) May 25 21:17:33.327: INFO: (3) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... (200; 4.527852ms) May 25 21:17:33.332: INFO: (4) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.534901ms) May 25 21:17:33.333: INFO: (4) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... 
(200; 4.957516ms) May 25 21:17:33.333: INFO: (4) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test (200; 5.111708ms) May 25 21:17:33.334: INFO: (4) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 6.132222ms) May 25 21:17:33.335: INFO: (4) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 6.919828ms) May 25 21:17:33.335: INFO: (4) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 7.126084ms) May 25 21:17:33.335: INFO: (4) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 7.16158ms) May 25 21:17:33.335: INFO: (4) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 7.228576ms) May 25 21:17:33.335: INFO: (4) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 7.2582ms) May 25 21:17:33.335: INFO: (4) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 7.325177ms) May 25 21:17:33.339: INFO: (5) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 4.119373ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... (200; 4.40117ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.450715ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 4.511523ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.503524ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 4.647179ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.680211ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.728228ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 4.833344ms) May 25 21:17:33.340: INFO: (5) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... (200; 3.678938ms) May 25 21:17:33.347: INFO: (6) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... 
(200; 3.81269ms) May 25 21:17:33.347: INFO: (6) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 3.766906ms) May 25 21:17:33.347: INFO: (6) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.75714ms) May 25 21:17:33.347: INFO: (6) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test (200; 3.855949ms) May 25 21:17:33.347: INFO: (6) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.812076ms) May 25 21:17:33.347: INFO: (6) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 3.959965ms) May 25 21:17:33.349: INFO: (6) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 5.231872ms) May 25 21:17:33.349: INFO: (6) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 5.4286ms) May 25 21:17:33.349: INFO: (6) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 5.212376ms) May 25 21:17:33.349: INFO: (6) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 5.454342ms) May 25 21:17:33.349: INFO: (6) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 5.57438ms) May 25 21:17:33.349: INFO: (6) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 5.544185ms) May 25 21:17:33.354: INFO: (7) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 5.236592ms) May 25 21:17:33.354: INFO: (7) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 5.165691ms) May 25 21:17:33.354: INFO: (7) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 5.31844ms) May 25 21:17:33.355: INFO: (7) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 5.573001ms) May 25 21:17:33.355: INFO: (7) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 5.692142ms) May 25 21:17:33.355: INFO: (7) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 5.750083ms) May 25 21:17:33.355: INFO: (7) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 5.764761ms) May 25 21:17:33.355: INFO: (7) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 5.767737ms) May 25 21:17:33.355: INFO: (7) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 6.277677ms) May 25 21:17:33.356: INFO: (7) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 6.534821ms) May 25 21:17:33.356: INFO: (7) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 6.544908ms) May 25 21:17:33.356: INFO: (7) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 6.526126ms) May 25 21:17:33.356: INFO: (7) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... (200; 6.503142ms) May 25 21:17:33.356: INFO: (7) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... 
(200; 6.729282ms) May 25 21:17:33.356: INFO: (7) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test (200; 3.646332ms) May 25 21:17:33.360: INFO: (8) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 4.240784ms) May 25 21:17:33.360: INFO: (8) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.104199ms) May 25 21:17:33.360: INFO: (8) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... (200; 4.140387ms) May 25 21:17:33.360: INFO: (8) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 4.16565ms) May 25 21:17:33.360: INFO: (8) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.288351ms) May 25 21:17:33.360: INFO: (8) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.428416ms) May 25 21:17:33.361: INFO: (8) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: ... (200; 4.63953ms) May 25 21:17:33.361: INFO: (8) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.640097ms) May 25 21:17:33.361: INFO: (8) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 4.954311ms) May 25 21:17:33.368: INFO: (8) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 11.781315ms) May 25 21:17:33.368: INFO: (8) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 11.751375ms) May 25 21:17:33.368: INFO: (8) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 11.989161ms) May 25 21:17:33.368: INFO: (8) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 12.253303ms) May 25 21:17:33.368: INFO: (8) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 12.192903ms) May 25 21:17:33.373: INFO: (9) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... (200; 5.16411ms) May 25 21:17:33.374: INFO: (9) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... 
(200; 5.132851ms) May 25 21:17:33.374: INFO: (9) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 6.084313ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 6.370766ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 6.355328ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 6.461287ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 6.338514ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 6.508888ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 6.427901ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 6.468478ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 6.497685ms) May 25 21:17:33.375: INFO: (9) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 6.441742ms) May 25 21:17:33.380: INFO: (10) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... (200; 6.776205ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 6.795982ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 6.785298ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 6.854229ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 6.745168ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 7.355596ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 7.128756ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 7.21099ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 7.239007ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 7.338598ms) May 25 21:17:33.382: INFO: (10) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 7.308982ms) May 25 21:17:33.385: INFO: (11) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 2.705494ms) May 25 21:17:33.385: INFO: (11) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... 
(200; 2.786619ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 3.042552ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test (200; 3.059997ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.117656ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 3.729208ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 3.680074ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.631556ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 3.757659ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 3.740737ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 3.929854ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 3.855335ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.058479ms) May 25 21:17:33.386: INFO: (11) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 3.968511ms) May 25 21:17:33.387: INFO: (11) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 4.072478ms) May 25 21:17:33.389: INFO: (12) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 2.916387ms) May 25 21:17:33.389: INFO: (12) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 2.923795ms) May 25 21:17:33.390: INFO: (12) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 2.769879ms) May 25 21:17:33.390: INFO: (12) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 2.981587ms) May 25 21:17:33.390: INFO: (12) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 3.551137ms) May 25 21:17:33.391: INFO: (12) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... 
(200; 4.171443ms) May 25 21:17:33.391: INFO: (12) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.132818ms) May 25 21:17:33.391: INFO: (12) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 4.388858ms) May 25 21:17:33.391: INFO: (12) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 4.717911ms) May 25 21:17:33.391: INFO: (12) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 4.784782ms) May 25 21:17:33.391: INFO: (12) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 4.823907ms) May 25 21:17:33.391: INFO: (12) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 4.864314ms) May 25 21:17:33.391: INFO: (12) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 4.871043ms) May 25 21:17:33.394: INFO: (13) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 2.781259ms) May 25 21:17:33.394: INFO: (13) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test (200; 3.869245ms) May 25 21:17:33.395: INFO: (13) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 3.851733ms) May 25 21:17:33.396: INFO: (13) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.239392ms) May 25 21:17:33.396: INFO: (13) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... (200; 4.232038ms) May 25 21:17:33.396: INFO: (13) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 4.491179ms) May 25 21:17:33.396: INFO: (13) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.518745ms) May 25 21:17:33.396: INFO: (13) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 4.619159ms) May 25 21:17:33.396: INFO: (13) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 4.771216ms) May 25 21:17:33.397: INFO: (13) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 5.610187ms) May 25 21:17:33.400: INFO: (14) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 2.582617ms) May 25 21:17:33.400: INFO: (14) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... (200; 2.915661ms) May 25 21:17:33.401: INFO: (14) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.841293ms) May 25 21:17:33.401: INFO: (14) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 3.891857ms) May 25 21:17:33.401: INFO: (14) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.890349ms) May 25 21:17:33.401: INFO: (14) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 3.913252ms) May 25 21:17:33.401: INFO: (14) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 3.954377ms) May 25 21:17:33.401: INFO: (14) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.014145ms) May 25 21:17:33.401: INFO: (14) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... 
(200; 5.827363ms) May 25 21:17:33.408: INFO: (15) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 5.802973ms) May 25 21:17:33.409: INFO: (15) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 5.87345ms) May 25 21:17:33.409: INFO: (15) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 6.326368ms) May 25 21:17:33.409: INFO: (15) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 6.395613ms) May 25 21:17:33.409: INFO: (15) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 6.430374ms) May 25 21:17:33.409: INFO: (15) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 6.572566ms) May 25 21:17:33.409: INFO: (15) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 6.637024ms) May 25 21:17:33.409: INFO: (15) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 6.586093ms) May 25 21:17:33.409: INFO: (15) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test (200; 2.906108ms) May 25 21:17:33.414: INFO: (16) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... (200; 3.12513ms) May 25 21:17:33.414: INFO: (16) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 3.027837ms) May 25 21:17:33.414: INFO: (16) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: ... (200; 4.144063ms) May 25 21:17:33.415: INFO: (16) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 4.255506ms) May 25 21:17:33.415: INFO: (16) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 4.150828ms) May 25 21:17:33.415: INFO: (16) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.1257ms) May 25 21:17:33.415: INFO: (16) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 4.169105ms) May 25 21:17:33.415: INFO: (16) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.220245ms) May 25 21:17:33.415: INFO: (16) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.220685ms) May 25 21:17:33.417: INFO: (17) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 2.662137ms) May 25 21:17:33.418: INFO: (17) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 2.826694ms) May 25 21:17:33.418: INFO: (17) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 2.813659ms) May 25 21:17:33.418: INFO: (17) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 2.989392ms) May 25 21:17:33.418: INFO: (17) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... 
(200; 3.026858ms) May 25 21:17:33.418: INFO: (17) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 3.248761ms) May 25 21:17:33.418: INFO: (17) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 3.232287ms) May 25 21:17:33.419: INFO: (17) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.678457ms) May 25 21:17:33.419: INFO: (17) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 3.740677ms) May 25 21:17:33.419: INFO: (17) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 4.56276ms) May 25 21:17:33.419: INFO: (17) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 4.614084ms) May 25 21:17:33.419: INFO: (17) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 4.555668ms) May 25 21:17:33.420: INFO: (17) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 4.765869ms) May 25 21:17:33.420: INFO: (17) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 4.928152ms) May 25 21:17:33.420: INFO: (17) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 4.960669ms) May 25 21:17:33.424: INFO: (18) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: test<... (200; 4.498368ms) May 25 21:17:33.424: INFO: (18) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 4.534467ms) May 25 21:17:33.424: INFO: (18) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 4.482725ms) May 25 21:17:33.424: INFO: (18) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 4.542784ms) May 25 21:17:33.425: INFO: (18) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 5.585101ms) May 25 21:17:33.425: INFO: (18) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 5.603612ms) May 25 21:17:33.425: INFO: (18) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 5.594275ms) May 25 21:17:33.425: INFO: (18) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 5.639797ms) May 25 21:17:33.425: INFO: (18) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 5.601441ms) May 25 21:17:33.426: INFO: (18) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 5.712914ms) May 25 21:17:33.429: INFO: (19) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:462/proxy/: tls qux (200; 3.142984ms) May 25 21:17:33.429: INFO: (19) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 3.201866ms) May 25 21:17:33.429: INFO: (19) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:1080/proxy/: test<... 
(200; 3.738126ms) May 25 21:17:33.429: INFO: (19) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:460/proxy/: tls baz (200; 3.83053ms) May 25 21:17:33.429: INFO: (19) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname1/proxy/: tls baz (200; 3.901131ms) May 25 21:17:33.429: INFO: (19) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname1/proxy/: foo (200; 3.837241ms) May 25 21:17:33.429: INFO: (19) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:160/proxy/: foo (200; 3.800533ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 3.888067ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz:162/proxy/: bar (200; 4.241976ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname1/proxy/: foo (200; 4.193528ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/services/http:proxy-service-fvxcc:portname2/proxy/: bar (200; 4.252866ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/pods/proxy-service-fvxcc-zqzhz/proxy/: test (200; 4.279382ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/services/proxy-service-fvxcc:portname2/proxy/: bar (200; 4.47827ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/services/https:proxy-service-fvxcc:tlsportname2/proxy/: tls qux (200; 4.43078ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/pods/http:proxy-service-fvxcc-zqzhz:1080/proxy/: ... (200; 4.504127ms) May 25 21:17:33.430: INFO: (19) /api/v1/namespaces/proxy-3715/pods/https:proxy-service-fvxcc-zqzhz:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:17:40.404: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 21:17:42.416: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038260, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038260, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038260, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038260, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 21:17:44.430: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038260, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038260, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038260, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038260, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:17:47.478: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:17:47.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7415" for this suite. STEP: Destroying namespace "webhook-7415-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.567 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":28,"skipped":563,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:17:47.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:17:53.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6898" for this suite. 
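------------------------------
The adoption sequence above runs in three beats: a bare pod carrying the label name=pod-adoption exists first, a ReplicationController with a matching selector is created second, and the RC manager then adopts the orphan instead of spawning a replacement, which shows up as an ownerReference on the pod. A hedged client-go sketch of that setup follows; the namespace, image, and the immediate Get at the end are illustrative assumptions (a real check would poll, since adoption happens asynchronously on the controller's next sync).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ns := "default" // assumption; the suite uses a generated namespace
	labels := map[string]string{"name": "pod-adoption"}

	// 1. An orphan pod with the label, created before any controller exists.
	podSpec := corev1.PodSpec{Containers: []corev1.Container{{
		Name:  "pod-adoption",
		Image: "docker.io/library/httpd:2.4.38-alpine", // any long-running image
	}}}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec:       podSpec,
	}
	if _, err := client.CoreV1().Pods(ns).Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 2. A ReplicationController whose selector matches the orphan's label.
	one := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       podSpec,
			},
		},
	}
	if _, err := client.CoreV1().ReplicationControllers(ns).Create(context.TODO(), rc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// 3. Once the RC manager syncs, the orphan is adopted rather than
	// replaced: it gains an ownerReference pointing at the RC.
	got, err := client.CoreV1().Pods(ns).Get(context.TODO(), "pod-adoption", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, ref := range got.OwnerReferences {
		fmt.Printf("owned by %s/%s\n", ref.Kind, ref.Name)
	}
}
------------------------------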
• [SLOW TEST:5.160 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":29,"skipped":603,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:17:53.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6998 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 25 21:17:53.135: INFO: Found 0 stateful pods, waiting for 3 May 25 21:18:03.140: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 25 21:18:03.140: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 25 21:18:03.140: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 25 21:18:13.140: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 25 21:18:13.140: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 25 21:18:13.140: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 25 21:18:13.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6998 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 21:18:13.514: INFO: stderr: "I0525 21:18:13.308986 292 log.go:172] (0xc000734c60) (0xc000365cc0) Create stream\nI0525 21:18:13.309049 292 log.go:172] (0xc000734c60) (0xc000365cc0) Stream added, broadcasting: 1\nI0525 21:18:13.311398 292 log.go:172] (0xc000734c60) Reply frame received for 1\nI0525 21:18:13.311441 292 log.go:172] (0xc000734c60) (0xc000792000) Create stream\nI0525 21:18:13.311451 292 log.go:172] (0xc000734c60) (0xc000792000) Stream added, broadcasting: 3\nI0525 21:18:13.312437 292 log.go:172] (0xc000734c60) Reply frame received for 3\nI0525 21:18:13.312483 292 log.go:172] (0xc000734c60) (0xc000365d60) Create stream\nI0525 21:18:13.312495 292 log.go:172] (0xc000734c60) (0xc000365d60) Stream added, broadcasting: 5\nI0525 21:18:13.313651 292 log.go:172] (0xc000734c60) Reply frame received for 5\nI0525 21:18:13.418949 292 log.go:172] (0xc000734c60) Data 
frame received for 5\nI0525 21:18:13.418993 292 log.go:172] (0xc000365d60) (5) Data frame handling\nI0525 21:18:13.419022 292 log.go:172] (0xc000365d60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 21:18:13.503935 292 log.go:172] (0xc000734c60) Data frame received for 3\nI0525 21:18:13.503973 292 log.go:172] (0xc000792000) (3) Data frame handling\nI0525 21:18:13.503987 292 log.go:172] (0xc000792000) (3) Data frame sent\nI0525 21:18:13.504387 292 log.go:172] (0xc000734c60) Data frame received for 5\nI0525 21:18:13.504416 292 log.go:172] (0xc000365d60) (5) Data frame handling\nI0525 21:18:13.504442 292 log.go:172] (0xc000734c60) Data frame received for 3\nI0525 21:18:13.504452 292 log.go:172] (0xc000792000) (3) Data frame handling\nI0525 21:18:13.506655 292 log.go:172] (0xc000734c60) Data frame received for 1\nI0525 21:18:13.506674 292 log.go:172] (0xc000365cc0) (1) Data frame handling\nI0525 21:18:13.506688 292 log.go:172] (0xc000365cc0) (1) Data frame sent\nI0525 21:18:13.506702 292 log.go:172] (0xc000734c60) (0xc000365cc0) Stream removed, broadcasting: 1\nI0525 21:18:13.506719 292 log.go:172] (0xc000734c60) Go away received\nI0525 21:18:13.507330 292 log.go:172] (0xc000734c60) (0xc000365cc0) Stream removed, broadcasting: 1\nI0525 21:18:13.507361 292 log.go:172] (0xc000734c60) (0xc000792000) Stream removed, broadcasting: 3\nI0525 21:18:13.507374 292 log.go:172] (0xc000734c60) (0xc000365d60) Stream removed, broadcasting: 5\n" May 25 21:18:13.514: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 21:18:13.514: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 25 21:18:23.551: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 25 21:18:33.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6998 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 21:18:33.801: INFO: stderr: "I0525 21:18:33.705310 315 log.go:172] (0xc000942a50) (0xc0005d3c20) Create stream\nI0525 21:18:33.705378 315 log.go:172] (0xc000942a50) (0xc0005d3c20) Stream added, broadcasting: 1\nI0525 21:18:33.708079 315 log.go:172] (0xc000942a50) Reply frame received for 1\nI0525 21:18:33.708143 315 log.go:172] (0xc000942a50) (0xc0007f4780) Create stream\nI0525 21:18:33.708165 315 log.go:172] (0xc000942a50) (0xc0007f4780) Stream added, broadcasting: 3\nI0525 21:18:33.709508 315 log.go:172] (0xc000942a50) Reply frame received for 3\nI0525 21:18:33.709573 315 log.go:172] (0xc000942a50) (0xc0009a2000) Create stream\nI0525 21:18:33.709605 315 log.go:172] (0xc000942a50) (0xc0009a2000) Stream added, broadcasting: 5\nI0525 21:18:33.710943 315 log.go:172] (0xc000942a50) Reply frame received for 5\nI0525 21:18:33.792681 315 log.go:172] (0xc000942a50) Data frame received for 5\nI0525 21:18:33.792724 315 log.go:172] (0xc0009a2000) (5) Data frame handling\nI0525 21:18:33.792736 315 log.go:172] (0xc0009a2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 21:18:33.792755 315 log.go:172] (0xc000942a50) Data frame received for 3\nI0525 21:18:33.792786 315 log.go:172] (0xc0007f4780) (3) Data frame handling\nI0525 21:18:33.792800 315 log.go:172] (0xc0007f4780) (3) Data frame 
sent\nI0525 21:18:33.792826 315 log.go:172] (0xc000942a50) Data frame received for 5\nI0525 21:18:33.792840 315 log.go:172] (0xc0009a2000) (5) Data frame handling\nI0525 21:18:33.793655 315 log.go:172] (0xc000942a50) Data frame received for 3\nI0525 21:18:33.793690 315 log.go:172] (0xc0007f4780) (3) Data frame handling\nI0525 21:18:33.794868 315 log.go:172] (0xc000942a50) Data frame received for 1\nI0525 21:18:33.794885 315 log.go:172] (0xc0005d3c20) (1) Data frame handling\nI0525 21:18:33.794892 315 log.go:172] (0xc0005d3c20) (1) Data frame sent\nI0525 21:18:33.794901 315 log.go:172] (0xc000942a50) (0xc0005d3c20) Stream removed, broadcasting: 1\nI0525 21:18:33.794951 315 log.go:172] (0xc000942a50) Go away received\nI0525 21:18:33.795173 315 log.go:172] (0xc000942a50) (0xc0005d3c20) Stream removed, broadcasting: 1\nI0525 21:18:33.795189 315 log.go:172] (0xc000942a50) (0xc0007f4780) Stream removed, broadcasting: 3\nI0525 21:18:33.795195 315 log.go:172] (0xc000942a50) (0xc0009a2000) Stream removed, broadcasting: 5\n" May 25 21:18:33.801: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 21:18:33.801: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 21:18:53.819: INFO: Waiting for StatefulSet statefulset-6998/ss2 to complete update May 25 21:18:53.819: INFO: Waiting for Pod statefulset-6998/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 25 21:19:03.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6998 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 21:19:04.108: INFO: stderr: "I0525 21:19:03.958321 337 log.go:172] (0xc0009c7130) (0xc000a04820) Create stream\nI0525 21:19:03.958372 337 log.go:172] (0xc0009c7130) (0xc000a04820) Stream added, broadcasting: 1\nI0525 21:19:03.964444 337 log.go:172] (0xc0009c7130) Reply frame received for 1\nI0525 21:19:03.964478 337 log.go:172] (0xc0009c7130) (0xc00062c6e0) Create stream\nI0525 21:19:03.964487 337 log.go:172] (0xc0009c7130) (0xc00062c6e0) Stream added, broadcasting: 3\nI0525 21:19:03.965791 337 log.go:172] (0xc0009c7130) Reply frame received for 3\nI0525 21:19:03.965824 337 log.go:172] (0xc0009c7130) (0xc0007514a0) Create stream\nI0525 21:19:03.965834 337 log.go:172] (0xc0009c7130) (0xc0007514a0) Stream added, broadcasting: 5\nI0525 21:19:03.967068 337 log.go:172] (0xc0009c7130) Reply frame received for 5\nI0525 21:19:04.066948 337 log.go:172] (0xc0009c7130) Data frame received for 5\nI0525 21:19:04.066974 337 log.go:172] (0xc0007514a0) (5) Data frame handling\nI0525 21:19:04.066991 337 log.go:172] (0xc0007514a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 21:19:04.099642 337 log.go:172] (0xc0009c7130) Data frame received for 3\nI0525 21:19:04.099654 337 log.go:172] (0xc00062c6e0) (3) Data frame handling\nI0525 21:19:04.099660 337 log.go:172] (0xc00062c6e0) (3) Data frame sent\nI0525 21:19:04.100000 337 log.go:172] (0xc0009c7130) Data frame received for 5\nI0525 21:19:04.100024 337 log.go:172] (0xc0007514a0) (5) Data frame handling\nI0525 21:19:04.100046 337 log.go:172] (0xc0009c7130) Data frame received for 3\nI0525 21:19:04.100057 337 log.go:172] (0xc00062c6e0) (3) Data frame handling\nI0525 21:19:04.101754 337 log.go:172] (0xc0009c7130) Data frame received for 1\nI0525 21:19:04.101783 337 log.go:172] (0xc000a04820) 
(1) Data frame handling\nI0525 21:19:04.101819 337 log.go:172] (0xc000a04820) (1) Data frame sent\nI0525 21:19:04.101936 337 log.go:172] (0xc0009c7130) (0xc000a04820) Stream removed, broadcasting: 1\nI0525 21:19:04.102001 337 log.go:172] (0xc0009c7130) Go away received\nI0525 21:19:04.102457 337 log.go:172] (0xc0009c7130) (0xc000a04820) Stream removed, broadcasting: 1\nI0525 21:19:04.102504 337 log.go:172] (0xc0009c7130) (0xc00062c6e0) Stream removed, broadcasting: 3\nI0525 21:19:04.102518 337 log.go:172] (0xc0009c7130) (0xc0007514a0) Stream removed, broadcasting: 5\n" May 25 21:19:04.108: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 21:19:04.108: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 21:19:14.144: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 25 21:19:24.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6998 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 21:19:24.419: INFO: stderr: "I0525 21:19:24.310040 358 log.go:172] (0xc000548dc0) (0xc0008e8000) Create stream\nI0525 21:19:24.310096 358 log.go:172] (0xc000548dc0) (0xc0008e8000) Stream added, broadcasting: 1\nI0525 21:19:24.312563 358 log.go:172] (0xc000548dc0) Reply frame received for 1\nI0525 21:19:24.312597 358 log.go:172] (0xc000548dc0) (0xc0006cf9a0) Create stream\nI0525 21:19:24.312607 358 log.go:172] (0xc000548dc0) (0xc0006cf9a0) Stream added, broadcasting: 3\nI0525 21:19:24.313660 358 log.go:172] (0xc000548dc0) Reply frame received for 3\nI0525 21:19:24.313704 358 log.go:172] (0xc000548dc0) (0xc0006cfb80) Create stream\nI0525 21:19:24.313720 358 log.go:172] (0xc000548dc0) (0xc0006cfb80) Stream added, broadcasting: 5\nI0525 21:19:24.314671 358 log.go:172] (0xc000548dc0) Reply frame received for 5\nI0525 21:19:24.409887 358 log.go:172] (0xc000548dc0) Data frame received for 3\nI0525 21:19:24.409932 358 log.go:172] (0xc0006cf9a0) (3) Data frame handling\nI0525 21:19:24.409944 358 log.go:172] (0xc0006cf9a0) (3) Data frame sent\nI0525 21:19:24.409953 358 log.go:172] (0xc000548dc0) Data frame received for 3\nI0525 21:19:24.409960 358 log.go:172] (0xc0006cf9a0) (3) Data frame handling\nI0525 21:19:24.409988 358 log.go:172] (0xc000548dc0) Data frame received for 5\nI0525 21:19:24.410005 358 log.go:172] (0xc0006cfb80) (5) Data frame handling\nI0525 21:19:24.410018 358 log.go:172] (0xc0006cfb80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 21:19:24.410163 358 log.go:172] (0xc000548dc0) Data frame received for 5\nI0525 21:19:24.410178 358 log.go:172] (0xc0006cfb80) (5) Data frame handling\nI0525 21:19:24.412326 358 log.go:172] (0xc000548dc0) Data frame received for 1\nI0525 21:19:24.412425 358 log.go:172] (0xc0008e8000) (1) Data frame handling\nI0525 21:19:24.412508 358 log.go:172] (0xc0008e8000) (1) Data frame sent\nI0525 21:19:24.412558 358 log.go:172] (0xc000548dc0) (0xc0008e8000) Stream removed, broadcasting: 1\nI0525 21:19:24.412617 358 log.go:172] (0xc000548dc0) Go away received\nI0525 21:19:24.413082 358 log.go:172] (0xc000548dc0) (0xc0008e8000) Stream removed, broadcasting: 1\nI0525 21:19:24.413105 358 log.go:172] (0xc000548dc0) (0xc0006cf9a0) Stream removed, broadcasting: 3\nI0525 21:19:24.413338 358 log.go:172] (0xc000548dc0) (0xc0006cfb80) Stream removed, broadcasting: 5\n" May 25 21:19:24.419: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 21:19:24.419: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 21:19:34.438: INFO: Waiting for StatefulSet statefulset-6998/ss2 to complete update May 25 21:19:34.438: INFO: Waiting for Pod statefulset-6998/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 21:19:34.438: INFO: Waiting for Pod statefulset-6998/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 21:19:34.438: INFO: Waiting for Pod statefulset-6998/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 21:19:44.445: INFO: Waiting for StatefulSet statefulset-6998/ss2 to complete update May 25 21:19:44.445: INFO: Waiting for Pod statefulset-6998/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 21:19:44.445: INFO: Waiting for Pod statefulset-6998/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 25 21:19:54.447: INFO: Waiting for StatefulSet statefulset-6998/ss2 to complete update May 25 21:19:54.447: INFO: Waiting for Pod statefulset-6998/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 25 21:20:04.445: INFO: Deleting all statefulset in ns statefulset-6998 May 25 21:20:04.447: INFO: Scaling statefulset ss2 to 0 May 25 21:20:34.471: INFO: Waiting for statefulset status.replicas updated to 0 May 25 21:20:34.474: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:20:34.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6998" for this suite. 
• [SLOW TEST:161.479 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":30,"skipped":605,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:20:34.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:20:35.135: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 21:20:37.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038435, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038435, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038435, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038435, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:20:40.175: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 25 21:20:44.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2461 to-be-attached-pod -i -c=container1' May 25 21:20:44.366: INFO: rc: 1 [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:20:44.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2461" for this suite. STEP: Destroying namespace "webhook-2461-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.021 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":31,"skipped":618,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:20:44.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7709 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-7709 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7709 May 25 21:20:44.647: INFO: Found 0 stateful pods, waiting for 1 May 25 21:20:54.655: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 25 21:20:54.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 21:20:57.589: INFO: stderr: "I0525 21:20:57.443854 400 log.go:172] (0xc0008d8c60) (0xc0006cde00) Create stream\nI0525 21:20:57.443878 400 log.go:172] (0xc0008d8c60) (0xc0006cde00) Stream added, broadcasting: 1\nI0525 21:20:57.445678 400 log.go:172] (0xc0008d8c60) Reply frame received for 1\nI0525 21:20:57.445705 400 log.go:172] (0xc0008d8c60) (0xc00061a5a0) Create stream\nI0525 21:20:57.445713 400 log.go:172] (0xc0008d8c60) (0xc00061a5a0) Stream added, broadcasting: 3\nI0525 21:20:57.446525 400 log.go:172] (0xc0008d8c60) Reply frame received for 3\nI0525 21:20:57.446541 400 log.go:172] (0xc0008d8c60) (0xc0006cdea0) 
Create stream\nI0525 21:20:57.446547 400 log.go:172] (0xc0008d8c60) (0xc0006cdea0) Stream added, broadcasting: 5\nI0525 21:20:57.447300 400 log.go:172] (0xc0008d8c60) Reply frame received for 5\nI0525 21:20:57.550317 400 log.go:172] (0xc0008d8c60) Data frame received for 5\nI0525 21:20:57.550341 400 log.go:172] (0xc0006cdea0) (5) Data frame handling\nI0525 21:20:57.550357 400 log.go:172] (0xc0006cdea0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 21:20:57.582356 400 log.go:172] (0xc0008d8c60) Data frame received for 3\nI0525 21:20:57.582377 400 log.go:172] (0xc00061a5a0) (3) Data frame handling\nI0525 21:20:57.582394 400 log.go:172] (0xc00061a5a0) (3) Data frame sent\nI0525 21:20:57.582403 400 log.go:172] (0xc0008d8c60) Data frame received for 3\nI0525 21:20:57.582411 400 log.go:172] (0xc00061a5a0) (3) Data frame handling\nI0525 21:20:57.582549 400 log.go:172] (0xc0008d8c60) Data frame received for 5\nI0525 21:20:57.582564 400 log.go:172] (0xc0006cdea0) (5) Data frame handling\nI0525 21:20:57.584159 400 log.go:172] (0xc0008d8c60) Data frame received for 1\nI0525 21:20:57.584172 400 log.go:172] (0xc0006cde00) (1) Data frame handling\nI0525 21:20:57.584178 400 log.go:172] (0xc0006cde00) (1) Data frame sent\nI0525 21:20:57.584187 400 log.go:172] (0xc0008d8c60) (0xc0006cde00) Stream removed, broadcasting: 1\nI0525 21:20:57.584223 400 log.go:172] (0xc0008d8c60) Go away received\nI0525 21:20:57.584410 400 log.go:172] (0xc0008d8c60) (0xc0006cde00) Stream removed, broadcasting: 1\nI0525 21:20:57.584427 400 log.go:172] (0xc0008d8c60) (0xc00061a5a0) Stream removed, broadcasting: 3\nI0525 21:20:57.584434 400 log.go:172] (0xc0008d8c60) (0xc0006cdea0) Stream removed, broadcasting: 5\n" May 25 21:20:57.589: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 21:20:57.589: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 21:20:57.592: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 25 21:21:07.596: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 21:21:07.596: INFO: Waiting for statefulset status.replicas updated to 0 May 25 21:21:07.619: INFO: POD NODE PHASE GRACE CONDITIONS May 25 21:21:07.619: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:20:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:20:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:20:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:20:44 +0000 UTC }] May 25 21:21:07.619: INFO: May 25 21:21:07.619: INFO: StatefulSet ss has not reached scale 3, at 1 May 25 21:21:08.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984660033s May 25 21:21:09.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979464416s May 25 21:21:10.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974393226s May 25 21:21:11.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.969502696s May 25 21:21:12.654: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.95611716s May 25 21:21:13.659: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.949962086s May 25 
21:21:14.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944264669s May 25 21:21:15.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.938778282s May 25 21:21:16.697: INFO: Verifying statefulset ss doesn't scale past 3 for another 933.749751ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7709 May 25 21:21:17.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 21:21:17.902: INFO: stderr: "I0525 21:21:17.825983 432 log.go:172] (0xc000b3e000) (0xc000792000) Create stream\nI0525 21:21:17.826050 432 log.go:172] (0xc000b3e000) (0xc000792000) Stream added, broadcasting: 1\nI0525 21:21:17.828881 432 log.go:172] (0xc000b3e000) Reply frame received for 1\nI0525 21:21:17.829008 432 log.go:172] (0xc000b3e000) (0xc0007920a0) Create stream\nI0525 21:21:17.829027 432 log.go:172] (0xc000b3e000) (0xc0007920a0) Stream added, broadcasting: 3\nI0525 21:21:17.830148 432 log.go:172] (0xc000b3e000) Reply frame received for 3\nI0525 21:21:17.830183 432 log.go:172] (0xc000b3e000) (0xc000792140) Create stream\nI0525 21:21:17.830194 432 log.go:172] (0xc000b3e000) (0xc000792140) Stream added, broadcasting: 5\nI0525 21:21:17.831063 432 log.go:172] (0xc000b3e000) Reply frame received for 5\nI0525 21:21:17.894306 432 log.go:172] (0xc000b3e000) Data frame received for 3\nI0525 21:21:17.894346 432 log.go:172] (0xc0007920a0) (3) Data frame handling\nI0525 21:21:17.894413 432 log.go:172] (0xc000b3e000) Data frame received for 5\nI0525 21:21:17.894450 432 log.go:172] (0xc000792140) (5) Data frame handling\nI0525 21:21:17.894465 432 log.go:172] (0xc000792140) (5) Data frame sent\nI0525 21:21:17.894477 432 log.go:172] (0xc000b3e000) Data frame received for 5\nI0525 21:21:17.894488 432 log.go:172] (0xc000792140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 21:21:17.894520 432 log.go:172] (0xc0007920a0) (3) Data frame sent\nI0525 21:21:17.894542 432 log.go:172] (0xc000b3e000) Data frame received for 3\nI0525 21:21:17.894553 432 log.go:172] (0xc0007920a0) (3) Data frame handling\nI0525 21:21:17.896139 432 log.go:172] (0xc000b3e000) Data frame received for 1\nI0525 21:21:17.896158 432 log.go:172] (0xc000792000) (1) Data frame handling\nI0525 21:21:17.896172 432 log.go:172] (0xc000792000) (1) Data frame sent\nI0525 21:21:17.896209 432 log.go:172] (0xc000b3e000) (0xc000792000) Stream removed, broadcasting: 1\nI0525 21:21:17.896246 432 log.go:172] (0xc000b3e000) Go away received\nI0525 21:21:17.896615 432 log.go:172] (0xc000b3e000) (0xc000792000) Stream removed, broadcasting: 1\nI0525 21:21:17.896636 432 log.go:172] (0xc000b3e000) (0xc0007920a0) Stream removed, broadcasting: 3\nI0525 21:21:17.896648 432 log.go:172] (0xc000b3e000) (0xc000792140) Stream removed, broadcasting: 5\n" May 25 21:21:17.902: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 21:21:17.902: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 21:21:17.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 21:21:18.124: INFO: stderr: "I0525 21:21:18.028192 452 log.go:172] (0xc0001198c0) (0xc000a8c6e0) 
Create stream\nI0525 21:21:18.028240 452 log.go:172] (0xc0001198c0) (0xc000a8c6e0) Stream added, broadcasting: 1\nI0525 21:21:18.033824 452 log.go:172] (0xc0001198c0) Reply frame received for 1\nI0525 21:21:18.033858 452 log.go:172] (0xc0001198c0) (0xc00071bae0) Create stream\nI0525 21:21:18.033868 452 log.go:172] (0xc0001198c0) (0xc00071bae0) Stream added, broadcasting: 3\nI0525 21:21:18.034849 452 log.go:172] (0xc0001198c0) Reply frame received for 3\nI0525 21:21:18.034894 452 log.go:172] (0xc0001198c0) (0xc0006b66e0) Create stream\nI0525 21:21:18.034922 452 log.go:172] (0xc0001198c0) (0xc0006b66e0) Stream added, broadcasting: 5\nI0525 21:21:18.035956 452 log.go:172] (0xc0001198c0) Reply frame received for 5\nI0525 21:21:18.084576 452 log.go:172] (0xc0001198c0) Data frame received for 5\nI0525 21:21:18.084596 452 log.go:172] (0xc0006b66e0) (5) Data frame handling\nI0525 21:21:18.084609 452 log.go:172] (0xc0006b66e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 21:21:18.113997 452 log.go:172] (0xc0001198c0) Data frame received for 5\nI0525 21:21:18.114123 452 log.go:172] (0xc0006b66e0) (5) Data frame handling\nI0525 21:21:18.114163 452 log.go:172] (0xc0006b66e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0525 21:21:18.114335 452 log.go:172] (0xc0001198c0) Data frame received for 5\nI0525 21:21:18.114382 452 log.go:172] (0xc0006b66e0) (5) Data frame handling\nI0525 21:21:18.114401 452 log.go:172] (0xc0006b66e0) (5) Data frame sent\n+ true\nI0525 21:21:18.114503 452 log.go:172] (0xc0001198c0) Data frame received for 3\nI0525 21:21:18.114542 452 log.go:172] (0xc00071bae0) (3) Data frame handling\nI0525 21:21:18.114561 452 log.go:172] (0xc00071bae0) (3) Data frame sent\nI0525 21:21:18.114672 452 log.go:172] (0xc0001198c0) Data frame received for 3\nI0525 21:21:18.114701 452 log.go:172] (0xc00071bae0) (3) Data frame handling\nI0525 21:21:18.114736 452 log.go:172] (0xc0001198c0) Data frame received for 5\nI0525 21:21:18.114763 452 log.go:172] (0xc0006b66e0) (5) Data frame handling\nI0525 21:21:18.117604 452 log.go:172] (0xc0001198c0) Data frame received for 1\nI0525 21:21:18.117640 452 log.go:172] (0xc000a8c6e0) (1) Data frame handling\nI0525 21:21:18.117669 452 log.go:172] (0xc000a8c6e0) (1) Data frame sent\nI0525 21:21:18.117704 452 log.go:172] (0xc0001198c0) (0xc000a8c6e0) Stream removed, broadcasting: 1\nI0525 21:21:18.117821 452 log.go:172] (0xc0001198c0) Go away received\nI0525 21:21:18.118245 452 log.go:172] (0xc0001198c0) (0xc000a8c6e0) Stream removed, broadcasting: 1\nI0525 21:21:18.118268 452 log.go:172] (0xc0001198c0) (0xc00071bae0) Stream removed, broadcasting: 3\nI0525 21:21:18.118280 452 log.go:172] (0xc0001198c0) (0xc0006b66e0) Stream removed, broadcasting: 5\n" May 25 21:21:18.124: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 21:21:18.124: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 21:21:18.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 21:21:18.337: INFO: stderr: "I0525 21:21:18.257910 474 log.go:172] (0xc0000f53f0) (0xc000733b80) Create stream\nI0525 21:21:18.257964 474 log.go:172] (0xc0000f53f0) (0xc000733b80) Stream added, broadcasting: 1\nI0525 21:21:18.265745 474 log.go:172] (0xc0000f53f0) Reply frame 
received for 1\nI0525 21:21:18.265796 474 log.go:172] (0xc0000f53f0) (0xc0008f2000) Create stream\nI0525 21:21:18.265808 474 log.go:172] (0xc0000f53f0) (0xc0008f2000) Stream added, broadcasting: 3\nI0525 21:21:18.269511 474 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0525 21:21:18.269552 474 log.go:172] (0xc0000f53f0) (0xc0008f20a0) Create stream\nI0525 21:21:18.269571 474 log.go:172] (0xc0000f53f0) (0xc0008f20a0) Stream added, broadcasting: 5\nI0525 21:21:18.274250 474 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0525 21:21:18.328207 474 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0525 21:21:18.328233 474 log.go:172] (0xc0008f20a0) (5) Data frame handling\nI0525 21:21:18.328240 474 log.go:172] (0xc0008f20a0) (5) Data frame sent\nI0525 21:21:18.328246 474 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0525 21:21:18.328250 474 log.go:172] (0xc0008f20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0525 21:21:18.328277 474 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0525 21:21:18.328308 474 log.go:172] (0xc0008f2000) (3) Data frame handling\nI0525 21:21:18.328336 474 log.go:172] (0xc0008f2000) (3) Data frame sent\nI0525 21:21:18.328359 474 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0525 21:21:18.328386 474 log.go:172] (0xc0008f2000) (3) Data frame handling\nI0525 21:21:18.329930 474 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0525 21:21:18.329964 474 log.go:172] (0xc000733b80) (1) Data frame handling\nI0525 21:21:18.329986 474 log.go:172] (0xc000733b80) (1) Data frame sent\nI0525 21:21:18.330065 474 log.go:172] (0xc0000f53f0) (0xc000733b80) Stream removed, broadcasting: 1\nI0525 21:21:18.330107 474 log.go:172] (0xc0000f53f0) Go away received\nI0525 21:21:18.330614 474 log.go:172] (0xc0000f53f0) (0xc000733b80) Stream removed, broadcasting: 1\nI0525 21:21:18.330634 474 log.go:172] (0xc0000f53f0) (0xc0008f2000) Stream removed, broadcasting: 3\nI0525 21:21:18.330644 474 log.go:172] (0xc0000f53f0) (0xc0008f20a0) Stream removed, broadcasting: 5\n" May 25 21:21:18.337: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 21:21:18.337: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 21:21:18.340: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 25 21:21:28.346: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 25 21:21:28.346: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 25 21:21:28.346: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 25 21:21:28.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 21:21:28.557: INFO: stderr: "I0525 21:21:28.483931 495 log.go:172] (0xc000540a50) (0xc00051a000) Create stream\nI0525 21:21:28.484028 495 log.go:172] (0xc000540a50) (0xc00051a000) Stream added, broadcasting: 1\nI0525 21:21:28.486902 495 log.go:172] (0xc000540a50) Reply frame received for 1\nI0525 21:21:28.487019 495 log.go:172] (0xc000540a50) (0xc000667ae0) Create stream\nI0525 21:21:28.487031 495 log.go:172] 
(0xc000540a50) (0xc000667ae0) Stream added, broadcasting: 3\nI0525 21:21:28.487909 495 log.go:172] (0xc000540a50) Reply frame received for 3\nI0525 21:21:28.487955 495 log.go:172] (0xc000540a50) (0xc00051a140) Create stream\nI0525 21:21:28.487981 495 log.go:172] (0xc000540a50) (0xc00051a140) Stream added, broadcasting: 5\nI0525 21:21:28.488908 495 log.go:172] (0xc000540a50) Reply frame received for 5\nI0525 21:21:28.550245 495 log.go:172] (0xc000540a50) Data frame received for 3\nI0525 21:21:28.550273 495 log.go:172] (0xc000667ae0) (3) Data frame handling\nI0525 21:21:28.550292 495 log.go:172] (0xc000667ae0) (3) Data frame sent\nI0525 21:21:28.550558 495 log.go:172] (0xc000540a50) Data frame received for 3\nI0525 21:21:28.550580 495 log.go:172] (0xc000667ae0) (3) Data frame handling\nI0525 21:21:28.550599 495 log.go:172] (0xc000540a50) Data frame received for 5\nI0525 21:21:28.550608 495 log.go:172] (0xc00051a140) (5) Data frame handling\nI0525 21:21:28.550619 495 log.go:172] (0xc00051a140) (5) Data frame sent\nI0525 21:21:28.550635 495 log.go:172] (0xc000540a50) Data frame received for 5\nI0525 21:21:28.550644 495 log.go:172] (0xc00051a140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 21:21:28.552162 495 log.go:172] (0xc000540a50) Data frame received for 1\nI0525 21:21:28.552177 495 log.go:172] (0xc00051a000) (1) Data frame handling\nI0525 21:21:28.552194 495 log.go:172] (0xc00051a000) (1) Data frame sent\nI0525 21:21:28.552301 495 log.go:172] (0xc000540a50) (0xc00051a000) Stream removed, broadcasting: 1\nI0525 21:21:28.552615 495 log.go:172] (0xc000540a50) Go away received\nI0525 21:21:28.552782 495 log.go:172] (0xc000540a50) (0xc00051a000) Stream removed, broadcasting: 1\nI0525 21:21:28.552806 495 log.go:172] (0xc000540a50) (0xc000667ae0) Stream removed, broadcasting: 3\nI0525 21:21:28.552818 495 log.go:172] (0xc000540a50) (0xc00051a140) Stream removed, broadcasting: 5\n" May 25 21:21:28.558: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 21:21:28.558: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 21:21:28.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 21:21:28.829: INFO: stderr: "I0525 21:21:28.698916 518 log.go:172] (0xc0009f2840) (0xc000a1c000) Create stream\nI0525 21:21:28.698983 518 log.go:172] (0xc0009f2840) (0xc000a1c000) Stream added, broadcasting: 1\nI0525 21:21:28.701618 518 log.go:172] (0xc0009f2840) Reply frame received for 1\nI0525 21:21:28.701642 518 log.go:172] (0xc0009f2840) (0xc000a1c0a0) Create stream\nI0525 21:21:28.701648 518 log.go:172] (0xc0009f2840) (0xc000a1c0a0) Stream added, broadcasting: 3\nI0525 21:21:28.702891 518 log.go:172] (0xc0009f2840) Reply frame received for 3\nI0525 21:21:28.702914 518 log.go:172] (0xc0009f2840) (0xc00065fa40) Create stream\nI0525 21:21:28.702921 518 log.go:172] (0xc0009f2840) (0xc00065fa40) Stream added, broadcasting: 5\nI0525 21:21:28.703946 518 log.go:172] (0xc0009f2840) Reply frame received for 5\nI0525 21:21:28.776154 518 log.go:172] (0xc0009f2840) Data frame received for 5\nI0525 21:21:28.776184 518 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0525 21:21:28.776204 518 log.go:172] (0xc00065fa40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 21:21:28.821891 518 
log.go:172] (0xc0009f2840) Data frame received for 5\nI0525 21:21:28.822030 518 log.go:172] (0xc00065fa40) (5) Data frame handling\nI0525 21:21:28.822072 518 log.go:172] (0xc0009f2840) Data frame received for 3\nI0525 21:21:28.822092 518 log.go:172] (0xc000a1c0a0) (3) Data frame handling\nI0525 21:21:28.822213 518 log.go:172] (0xc000a1c0a0) (3) Data frame sent\nI0525 21:21:28.822236 518 log.go:172] (0xc0009f2840) Data frame received for 3\nI0525 21:21:28.822253 518 log.go:172] (0xc000a1c0a0) (3) Data frame handling\nI0525 21:21:28.823905 518 log.go:172] (0xc0009f2840) Data frame received for 1\nI0525 21:21:28.823918 518 log.go:172] (0xc000a1c000) (1) Data frame handling\nI0525 21:21:28.823925 518 log.go:172] (0xc000a1c000) (1) Data frame sent\nI0525 21:21:28.823933 518 log.go:172] (0xc0009f2840) (0xc000a1c000) Stream removed, broadcasting: 1\nI0525 21:21:28.824085 518 log.go:172] (0xc0009f2840) Go away received\nI0525 21:21:28.824186 518 log.go:172] (0xc0009f2840) (0xc000a1c000) Stream removed, broadcasting: 1\nI0525 21:21:28.824214 518 log.go:172] (0xc0009f2840) (0xc000a1c0a0) Stream removed, broadcasting: 3\nI0525 21:21:28.824232 518 log.go:172] (0xc0009f2840) (0xc00065fa40) Stream removed, broadcasting: 5\n" May 25 21:21:28.829: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 21:21:28.829: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 21:21:28.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 21:21:29.056: INFO: stderr: "I0525 21:21:28.956824 539 log.go:172] (0xc000bcafd0) (0xc000940460) Create stream\nI0525 21:21:28.956904 539 log.go:172] (0xc000bcafd0) (0xc000940460) Stream added, broadcasting: 1\nI0525 21:21:28.962089 539 log.go:172] (0xc000bcafd0) Reply frame received for 1\nI0525 21:21:28.962138 539 log.go:172] (0xc000bcafd0) (0xc0005c6640) Create stream\nI0525 21:21:28.962158 539 log.go:172] (0xc000bcafd0) (0xc0005c6640) Stream added, broadcasting: 3\nI0525 21:21:28.962991 539 log.go:172] (0xc000bcafd0) Reply frame received for 3\nI0525 21:21:28.963034 539 log.go:172] (0xc000bcafd0) (0xc00037d400) Create stream\nI0525 21:21:28.963053 539 log.go:172] (0xc000bcafd0) (0xc00037d400) Stream added, broadcasting: 5\nI0525 21:21:28.963976 539 log.go:172] (0xc000bcafd0) Reply frame received for 5\nI0525 21:21:29.019777 539 log.go:172] (0xc000bcafd0) Data frame received for 5\nI0525 21:21:29.019804 539 log.go:172] (0xc00037d400) (5) Data frame handling\nI0525 21:21:29.019822 539 log.go:172] (0xc00037d400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 21:21:29.046863 539 log.go:172] (0xc000bcafd0) Data frame received for 3\nI0525 21:21:29.046881 539 log.go:172] (0xc0005c6640) (3) Data frame handling\nI0525 21:21:29.046891 539 log.go:172] (0xc0005c6640) (3) Data frame sent\nI0525 21:21:29.047410 539 log.go:172] (0xc000bcafd0) Data frame received for 3\nI0525 21:21:29.047422 539 log.go:172] (0xc0005c6640) (3) Data frame handling\nI0525 21:21:29.047455 539 log.go:172] (0xc000bcafd0) Data frame received for 5\nI0525 21:21:29.047494 539 log.go:172] (0xc00037d400) (5) Data frame handling\nI0525 21:21:29.049467 539 log.go:172] (0xc000bcafd0) Data frame received for 1\nI0525 21:21:29.049507 539 log.go:172] (0xc000940460) (1) Data frame handling\nI0525 21:21:29.049526 539 log.go:172] 
(0xc000940460) (1) Data frame sent\nI0525 21:21:29.049543 539 log.go:172] (0xc000bcafd0) (0xc000940460) Stream removed, broadcasting: 1\nI0525 21:21:29.049565 539 log.go:172] (0xc000bcafd0) Go away received\nI0525 21:21:29.049987 539 log.go:172] (0xc000bcafd0) (0xc000940460) Stream removed, broadcasting: 1\nI0525 21:21:29.050012 539 log.go:172] (0xc000bcafd0) (0xc0005c6640) Stream removed, broadcasting: 3\nI0525 21:21:29.050025 539 log.go:172] (0xc000bcafd0) (0xc00037d400) Stream removed, broadcasting: 5\n" May 25 21:21:29.056: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 21:21:29.056: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 21:21:29.056: INFO: Waiting for statefulset status.replicas updated to 0 May 25 21:21:29.059: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 25 21:21:39.068: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 21:21:39.068: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 25 21:21:39.068: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 25 21:21:39.117: INFO: POD NODE PHASE GRACE CONDITIONS May 25 21:21:39.117: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:20:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:20:44 +0000 UTC }] May 25 21:21:39.117: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:07 +0000 UTC }] May 25 21:21:39.117: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:07 +0000 UTC }] May 25 21:21:39.117: INFO: May 25 21:21:39.117: INFO: StatefulSet ss has not reached scale 0, at 3 May 25 21:21:40.147: INFO: POD NODE PHASE GRACE CONDITIONS May 25 21:21:40.147: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:20:44 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:20:44 +0000 UTC }] May 25 21:21:40.148: INFO: ss-1 
jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:29 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-25 21:21:07 +0000 UTC }]
May 25 21:21:40.148: INFO: ss-2 jerma-worker Running 30s [conditions identical to ss-1 above]
May 25 21:21:40.148: INFO:
May 25 21:21:40.148: INFO: StatefulSet ss has not reached scale 0, at 3
[the same POD NODE PHASE GRACE CONDITIONS table and "StatefulSet ss has not reached scale 0, at 3" verdict repeat at one-second intervals from 21:21:41.151 through 21:21:48.193; ss-0 on jerma-worker2 and ss-1/ss-2 on jerma-worker flip from Running to Pending at 21:21:42 and otherwise keep reporting Ready=False and ContainersReady=False with "ContainersNotReady containers with unready status: [webserver]"]
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7709
May 25 21:21:49.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 21:21:49.344: INFO: rc: 1
May 25 21:21:49.344: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("webserver")
error: exit status 1
May 25 21:21:59.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 21:21:59.448: INFO: rc: 1
May 25 21:21:59.448: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
[the identical exec is retried every 10 seconds from 21:22:09.449 through 21:26:42.290, each attempt returning rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found']
May 25 21:26:52.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 25 21:26:52.381: INFO: rc: 1
May 25 21:26:52.382: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0:
May 25 21:26:52.382: INFO: Scaling statefulset ss to 0
May 25 21:26:52.399: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 25 21:26:52.401: INFO: Deleting all statefulset in ns statefulset-7709
May 25 21:26:52.403: INFO: Scaling statefulset ss to 0
May 25 21:26:52.411: INFO: Waiting for statefulset status.replicas updated to 0
May 25 21:26:52.414: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 21:26:52.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7709" for this suite.

• [SLOW TEST:367.906 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":32,"skipped":619,"failed":0}
SSSSSSSSSSSSSS
------------------------------
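The scale-down exercised by this spec can be replayed by hand against the same cluster. A minimal kubectl sketch, reusing the names recorded in the log above (illustrative only; the harness drives this through the Go client, not kubectl):

    # Scale the StatefulSet to zero, as the test's scale-down step does.
    kubectl --kubeconfig=/root/.kube/config scale statefulset ss \
      --namespace=statefulset-7709 --replicas=0

    # Watch status.replicas converge, mirroring "Waiting for statefulset
    # status.replicas updated to 0" above.
    kubectl --namespace=statefulset-7709 get statefulset ss \
      -o jsonpath='{.status.replicas}'

    # The per-pod probe the harness keeps retrying, verbatim from the log; the
    # trailing "|| true" means only kubectl-level failures (container torn
    # down, pod object gone) can produce the rc: 1 seen above.
    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7709 ss-0 \
      -- /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'

Note the two distinct failure modes in the retry loop: first the pod still exists but its webserver container is already torn down ("container not found"), then the pod object itself disappears ("pods \"ss-0\" not found").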
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 21:26:52.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 25 21:26:53.399: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 25 21:26:55.410: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038813, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726038813, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 25 21:26:57.414: INFO: deployment status: [identical to the 21:26:55 report above]
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 21:27:00.487: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 25 21:27:00.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4769-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 21:27:01.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6610" for this suite.
STEP: Destroying namespace "webhook-6610-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.361 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":33,"skipped":633,"failed":0}
SS
------------------------------
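The registration step above ("Registering the mutating webhook ... via the AdmissionRegistration API") is performed through the Go client in the harness, but the object it creates looks roughly like the following. A hypothetical sketch only: the service name/namespace and the resource plural come from the log, while the configuration name, webhook name, path, version, and CA bundle are placeholders:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: e2e-test-mutating-webhook           # placeholder name
    webhooks:
    - name: crd-mutation.webhook.example.com    # placeholder name
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: webhook-6610
          path: /mutating-custom-resource       # placeholder path
        caBundle: CA_BUNDLE_BASE64              # placeholder; the cert from "Setting up server cert"
      rules:
      - apiGroups: ["webhook.example.com"]
        apiVersions: ["v1"]                     # assumed version
        operations: ["CREATE"]
        resources: ["e2e-test-webhook-4769-crds"]
      sideEffects: None
      admissionReviewVersions: ["v1"]
    EOF

With such a configuration in place, creating an e2e-test-webhook-4769-crds object routes the admission request through the e2e-test-webhook service, which mutates the custom resource before it is persisted.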
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 21:27:01.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 25 21:27:01.922: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2797 /api/v1/namespaces/watch-2797/configmaps/e2e-watch-test-label-changed 1a13fb74-54d4-4210-86dc-c32ac3995f51 19114522 0 2020-05-25 21:27:01 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 25 21:27:01.922: INFO: Got : MODIFIED [same object; resourceVersion 19114523, Data:map[string]string{mutation: 1,}]
May 25 21:27:01.922: INFO: Got : DELETED [same object; resourceVersion 19114524, Data:map[string]string{mutation: 1,}]
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 25 21:27:11.995: INFO: Got : ADDED [same object; resourceVersion 19114570, Data:map[string]string{mutation: 2,}]
May 25 21:27:11.995: INFO: Got : MODIFIED [same object; resourceVersion 19114571, Data:map[string]string{mutation: 3,}]
May 25 21:27:11.995: INFO: Got : DELETED [same object; resourceVersion 19114572, Data:map[string]string{mutation: 3,}]
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 21:27:11.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2797" for this suite.

• [SLOW TEST:10.209 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":34,"skipped":635,"failed":0}
S
------------------------------
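The selector behaviour demonstrated above is observable with plain kubectl as well. A sketch, assuming the namespace and label from the log; the replacement label value is arbitrary, and --output-watch-events (available in recent kubectl releases) prints the ADDED/MODIFIED/DELETED verbs, whereas a plain --watch prints only the objects:

    # Watch only configmaps whose label currently matches; an object whose
    # label is changed away stops producing events, and changing it back
    # produces a fresh ADDED event, exactly as in the log above.
    kubectl get configmaps --namespace=watch-2797 \
      --selector=watch-this-configmap=label-changed-and-restored \
      --watch --output-watch-events

    # In a second shell, flip the label away and back:
    kubectl label configmap e2e-watch-test-label-changed --namespace=watch-2797 \
      watch-this-configmap=off-the-watch --overwrite
    kubectl label configmap e2e-watch-test-label-changed --namespace=watch-2797 \
      watch-this-configmap=label-changed-and-restored --overwrite

Note that the reappearance arrives as a single ADDED event carrying the object's current state (resourceVersion 19114570, mutation: 2 above), not as a replay of the modifications made while the label did not match.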
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:27:25.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-7258" for this suite. • [SLOW TEST:13.134 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":35,"skipped":636,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:27:25.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:27:25.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9" in namespace "projected-7589" to be "success or failure" May 25 21:27:25.234: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.794173ms May 25 21:27:27.296: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07787841s May 25 21:27:29.300: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.082185504s May 25 21:27:31.305: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087219428s STEP: Saw pod success May 25 21:27:31.305: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9" satisfied condition "success or failure" May 25 21:27:31.309: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9 container client-container: STEP: delete the pod May 25 21:27:31.343: INFO: Waiting for pod downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9 to disappear May 25 21:27:31.347: INFO: Pod downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:27:31.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7589" for this suite. 
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 21:27:25.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 25 21:27:25.218: INFO: Waiting up to 5m0s for pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9" in namespace "projected-7589" to be "success or failure"
May 25 21:27:25.234: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.794173ms
May 25 21:27:27.296: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07787841s
May 25 21:27:29.300: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9": Phase="Running", Reason="", readiness=true. Elapsed: 4.082185504s
May 25 21:27:31.305: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087219428s
STEP: Saw pod success
May 25 21:27:31.305: INFO: Pod "downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9" satisfied condition "success or failure"
May 25 21:27:31.309: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9 container client-container:
STEP: delete the pod
May 25 21:27:31.343: INFO: Waiting for pod downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9 to disappear
May 25 21:27:31.347: INFO: Pod downwardapi-volume-374f02c9-45ba-4250-b0a2-1e51abe3b7c9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 21:27:31.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7589" for this suite.

• [SLOW TEST:6.215 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":639,"failed":0}
SSSSS
------------------------------
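The downward API volume used above exposes the container's own resource requests as files. A sketch of the relevant pod spec; the names and the 250m request are illustrative, though the divisor mechanics are standard (with divisor 1m, a 250m CPU request is written to the file as the string 250):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-demo      # placeholder name
    spec:
      containers:
      - name: client-container
        image: busybox                # placeholder image
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m             # report the request in millicores
      restartPolicy: Never
    EOF

The "success or failure" condition in the log is simply this kind of pod running to completion after printing the expected value from the mounted file.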
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 21:27:31.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0525 21:28:02.025245 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 25 21:28:02.025: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 21:28:02.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-82" for this suite.

• [SLOW TEST:30.679 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":37,"skipped":644,"failed":0}
SSSSSSSSSSSSSS
------------------------------
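The PropagationPolicy=Orphan delete verified above can be reproduced with kubectl. A sketch using a hypothetical deployment name (the log does not record the one the test creates); note that the flag spelling changed across releases:

    # Create a throwaway deployment and note its ReplicaSet.
    kubectl create deployment orphan-demo --image=nginx      # placeholder name/image
    kubectl get replicasets --selector=app=orphan-demo

    # Delete the deployment while orphaning its dependents. Current kubectl
    # spells this --cascade=orphan; the v1.17-era client used in this run
    # spelled it --cascade=false.
    kubectl delete deployment orphan-demo --cascade=orphan

    # The ReplicaSet (and its pods) survive with the deployment's
    # ownerReference removed; the spec's 30-second wait confirms the garbage
    # collector does not "mistakenly" delete them afterwards.
    kubectl get replicasets --selector=app=orphan-demo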
[Conformance]","total":278,"completed":38,"skipped":658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:28:18.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-c9424cae-c2b7-49f1-aa2f-1a784d2d4316 STEP: Creating a pod to test consume secrets May 25 21:28:18.502: INFO: Waiting up to 5m0s for pod "pod-secrets-1cd3d8a1-b936-4fbe-9e01-bc052c910c90" in namespace "secrets-2182" to be "success or failure" May 25 21:28:18.508: INFO: Pod "pod-secrets-1cd3d8a1-b936-4fbe-9e01-bc052c910c90": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113699ms May 25 21:28:20.512: INFO: Pod "pod-secrets-1cd3d8a1-b936-4fbe-9e01-bc052c910c90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009898718s May 25 21:28:22.516: INFO: Pod "pod-secrets-1cd3d8a1-b936-4fbe-9e01-bc052c910c90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014182739s STEP: Saw pod success May 25 21:28:22.516: INFO: Pod "pod-secrets-1cd3d8a1-b936-4fbe-9e01-bc052c910c90" satisfied condition "success or failure" May 25 21:28:22.520: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1cd3d8a1-b936-4fbe-9e01-bc052c910c90 container secret-volume-test: STEP: delete the pod May 25 21:28:22.567: INFO: Waiting for pod pod-secrets-1cd3d8a1-b936-4fbe-9e01-bc052c910c90 to disappear May 25 21:28:22.580: INFO: Pod pod-secrets-1cd3d8a1-b936-4fbe-9e01-bc052c910c90 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:28:22.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2182" for this suite. STEP: Destroying namespace "secret-namespace-2403" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":691,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:28:22.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-3702 STEP: creating replication controller nodeport-test in namespace services-3702 I0525 21:28:22.792006 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3702, replica count: 2 I0525 21:28:25.842415 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 21:28:28.842730 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 21:28:28.842: INFO: Creating new exec pod May 25 21:28:33.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3702 execpod4sdv9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 25 21:28:34.181: INFO: stderr: "I0525 21:28:34.006445 1180 log.go:172] (0xc000a586e0) (0xc000a38280) Create stream\nI0525 21:28:34.006524 1180 log.go:172] (0xc000a586e0) (0xc000a38280) Stream added, broadcasting: 1\nI0525 21:28:34.010032 1180 log.go:172] (0xc000a586e0) Reply frame received for 1\nI0525 21:28:34.010093 1180 log.go:172] (0xc000a586e0) (0xc000aaa460) Create stream\nI0525 21:28:34.010116 1180 log.go:172] (0xc000a586e0) (0xc000aaa460) Stream added, broadcasting: 3\nI0525 21:28:34.011075 1180 log.go:172] (0xc000a586e0) Reply frame received for 3\nI0525 21:28:34.011120 1180 log.go:172] (0xc000a586e0) (0xc000aaa500) Create stream\nI0525 21:28:34.011132 1180 log.go:172] (0xc000a586e0) (0xc000aaa500) Stream added, broadcasting: 5\nI0525 21:28:34.012144 1180 log.go:172] (0xc000a586e0) Reply frame received for 5\nI0525 21:28:34.152204 1180 log.go:172] (0xc000a586e0) Data frame received for 5\nI0525 21:28:34.152236 1180 log.go:172] (0xc000aaa500) (5) Data frame handling\nI0525 21:28:34.152256 1180 log.go:172] (0xc000aaa500) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0525 21:28:34.172545 1180 log.go:172] (0xc000a586e0) Data frame received for 5\nI0525 21:28:34.172593 1180 log.go:172] (0xc000aaa500) (5) Data frame handling\nI0525 21:28:34.172647 1180 log.go:172] (0xc000aaa500) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0525 21:28:34.172949 1180 log.go:172] (0xc000a586e0) Data frame received for 3\nI0525 21:28:34.172967 1180 log.go:172] (0xc000aaa460) (3) 
[sig-network] Services
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 21:28:22.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service nodeport-test with type=NodePort in namespace services-3702
STEP: creating replication controller nodeport-test in namespace services-3702
I0525 21:28:22.792006 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3702, replica count: 2
I0525 21:28:25.842415 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0525 21:28:28.842730 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 25 21:28:28.842: INFO: Creating new exec pod
May 25 21:28:33.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3702 execpod4sdv9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
May 25 21:28:34.181: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" [log.go:172 stream create/teardown chatter elided]
May 25 21:28:34.181: INFO: stdout: ""
May 25 21:28:34.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3702 execpod4sdv9 -- /bin/sh -x -c nc -zv -t -w 2 10.108.233.53 80'
May 25 21:28:34.410: INFO: stderr: "+ nc -zv -t -w 2 10.108.233.53 80\nConnection to 10.108.233.53 80 port [tcp/http] succeeded!\n" [log.go:172 stream create/teardown chatter elided]
May 25 21:28:34.410: INFO: stdout: ""
May 25 21:28:34.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3702 execpod4sdv9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30592'
May 25 21:28:34.624: INFO: stderr: "+ nc -zv -t -w 2 172.17.0.10 30592\nConnection to 172.17.0.10 30592 port [tcp/30592] succeeded!\n" [log.go:172 stream create/teardown chatter elided]
May 25 21:28:34.625: INFO: stdout: ""
May 25 21:28:34.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3702 execpod4sdv9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30592'
May 25 21:28:34.974: INFO: stderr: "+ nc -zv -t -w 2 172.17.0.8 30592\nConnection to 172.17.0.8 30592 port [tcp/30592] succeeded!\n" [log.go:172 stream create/teardown chatter elided]
May 25 21:28:34.974: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 21:28:34.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3702" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143

• [SLOW TEST:12.389 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":40,"skipped":700,"failed":0}
SSSSSSSSSSSSS
------------------------------
Data frame sent\nI0525 21:28:34.969550 1244 log.go:172] (0xc0000f71e0) Data frame received for 5\nI0525 21:28:34.969565 1244 log.go:172] (0xc0006f1b80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30592\nConnection to 172.17.0.8 30592 port [tcp/30592] succeeded!\nI0525 21:28:34.969590 1244 log.go:172] (0xc00098e000) (3) Data frame handling\nI0525 21:28:34.970862 1244 log.go:172] (0xc0000f71e0) Data frame received for 1\nI0525 21:28:34.970892 1244 log.go:172] (0xc0006f19a0) (1) Data frame handling\nI0525 21:28:34.970913 1244 log.go:172] (0xc0006f19a0) (1) Data frame sent\nI0525 21:28:34.970937 1244 log.go:172] (0xc0000f71e0) (0xc0006f19a0) Stream removed, broadcasting: 1\nI0525 21:28:34.970961 1244 log.go:172] (0xc0000f71e0) Go away received\nI0525 21:28:34.971219 1244 log.go:172] (0xc0000f71e0) (0xc0006f19a0) Stream removed, broadcasting: 1\nI0525 21:28:34.971234 1244 log.go:172] (0xc0000f71e0) (0xc00098e000) Stream removed, broadcasting: 3\nI0525 21:28:34.971243 1244 log.go:172] (0xc0000f71e0) (0xc0006f1b80) Stream removed, broadcasting: 5\n" May 25 21:28:34.974: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:28:34.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3702" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.389 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":40,"skipped":700,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:28:34.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] 
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:28:35.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1144" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":41,"skipped":713,"failed":0} SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:28:35.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-5e9c6cc1-ed8a-4153-9cee-cc0987dc502b STEP: Creating the pod STEP: Updating configmap configmap-test-upd-5e9c6cc1-ed8a-4153-9cee-cc0987dc502b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:28:41.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1175" for this suite. • [SLOW TEST:6.265 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":715,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:28:41.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 25 21:28:41.422: INFO: Waiting up to 5m0s for pod "pod-35a24203-2ea8-4504-b87a-b02f17ec397a" in namespace "emptydir-4425" to be "success or failure" May 25 21:28:41.426: INFO: Pod "pod-35a24203-2ea8-4504-b87a-b02f17ec397a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.23946ms May 25 21:28:43.431: INFO: Pod "pod-35a24203-2ea8-4504-b87a-b02f17ec397a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008982357s May 25 21:28:45.435: INFO: Pod "pod-35a24203-2ea8-4504-b87a-b02f17ec397a": Phase="Running", Reason="", readiness=true. Elapsed: 4.013060236s May 25 21:28:47.439: INFO: Pod "pod-35a24203-2ea8-4504-b87a-b02f17ec397a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017127136s STEP: Saw pod success May 25 21:28:47.439: INFO: Pod "pod-35a24203-2ea8-4504-b87a-b02f17ec397a" satisfied condition "success or failure" May 25 21:28:47.442: INFO: Trying to get logs from node jerma-worker pod pod-35a24203-2ea8-4504-b87a-b02f17ec397a container test-container: STEP: delete the pod May 25 21:28:47.457: INFO: Waiting for pod pod-35a24203-2ea8-4504-b87a-b02f17ec397a to disappear May 25 21:28:47.461: INFO: Pod pod-35a24203-2ea8-4504-b87a-b02f17ec397a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:28:47.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4425" for this suite. • [SLOW TEST:6.111 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":717,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:28:47.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:28:47.650: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-73f860ac-08bb-4051-91f8-710254b60f3d" in namespace "security-context-test-8885" to be "success or failure" May 25 21:28:47.653: INFO: Pod "busybox-privileged-false-73f860ac-08bb-4051-91f8-710254b60f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.038605ms May 25 21:28:49.657: INFO: Pod "busybox-privileged-false-73f860ac-08bb-4051-91f8-710254b60f3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007050432s May 25 21:28:51.661: INFO: Pod "busybox-privileged-false-73f860ac-08bb-4051-91f8-710254b60f3d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010735664s May 25 21:28:51.661: INFO: Pod "busybox-privileged-false-73f860ac-08bb-4051-91f8-710254b60f3d" satisfied condition "success or failure" May 25 21:28:51.668: INFO: Got logs for pod "busybox-privileged-false-73f860ac-08bb-4051-91f8-710254b60f3d": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:28:51.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8885" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":719,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:28:51.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:28:52.075: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 25 21:28:52.147: INFO: Number of nodes with available pods: 0 May 25 21:28:52.147: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
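The step above drives DaemonSet scheduling purely through node labels and a nodeSelector; the polling output that follows shows the daemon pod appearing once a node matches. A minimal hand-rolled sketch of the same mechanism, assuming kubectl access to a scratch cluster — the DaemonSet name, the color label, the <node-name> placeholder, and the pause image are illustrative, not this suite's fixtures:

# DaemonSet that only schedules onto nodes labeled color=blue
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-ds
spec:
  selector:
    matchLabels:
      app: demo-ds
  template:
    metadata:
      labels:
        app: demo-ds
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
# No daemon pods exist until some node matches the selector:
kubectl label node <node-name> color=blue                  # daemon pod is launched there
kubectl label node <node-name> color=green --overwrite     # daemon pod is deleted again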
May 25 21:28:52.289: INFO: Number of nodes with available pods: 0 May 25 21:28:52.289: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:28:53.294: INFO: Number of nodes with available pods: 0 May 25 21:28:53.294: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:28:54.388: INFO: Number of nodes with available pods: 0 May 25 21:28:54.388: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:28:55.294: INFO: Number of nodes with available pods: 0 May 25 21:28:55.294: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:28:56.293: INFO: Number of nodes with available pods: 1 May 25 21:28:56.293: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 25 21:28:56.343: INFO: Number of nodes with available pods: 1 May 25 21:28:56.343: INFO: Number of running nodes: 0, number of available pods: 1 May 25 21:28:57.347: INFO: Number of nodes with available pods: 0 May 25 21:28:57.347: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 25 21:28:57.368: INFO: Number of nodes with available pods: 0 May 25 21:28:57.368: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:28:58.372: INFO: Number of nodes with available pods: 0 May 25 21:28:58.372: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:28:59.373: INFO: Number of nodes with available pods: 0 May 25 21:28:59.373: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:00.373: INFO: Number of nodes with available pods: 0 May 25 21:29:00.373: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:01.373: INFO: Number of nodes with available pods: 0 May 25 21:29:01.373: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:02.373: INFO: Number of nodes with available pods: 0 May 25 21:29:02.373: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:03.373: INFO: Number of nodes with available pods: 0 May 25 21:29:03.373: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:04.374: INFO: Number of nodes with available pods: 0 May 25 21:29:04.374: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:05.372: INFO: Number of nodes with available pods: 0 May 25 21:29:05.372: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:06.373: INFO: Number of nodes with available pods: 0 May 25 21:29:06.373: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:07.372: INFO: Number of nodes with available pods: 0 May 25 21:29:07.372: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:08.373: INFO: Number of nodes with available pods: 0 May 25 21:29:08.373: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:09.373: INFO: Number of nodes with available pods: 0 May 25 21:29:09.373: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:10.372: INFO: Number of nodes with available pods: 0 May 25 21:29:10.372: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:11.447: INFO: Number of nodes with available pods: 0 May 25 21:29:11.447: INFO: Node jerma-worker2 is running more than one daemon pod May 25 21:29:12.373: INFO: Number of nodes with available pods: 0 May 25 21:29:12.373: INFO: Node jerma-worker2 is running 
more than one daemon pod May 25 21:29:13.372: INFO: Number of nodes with available pods: 1 May 25 21:29:13.372: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4305, will wait for the garbage collector to delete the pods May 25 21:29:13.438: INFO: Deleting DaemonSet.extensions daemon-set took: 7.058531ms May 25 21:29:13.738: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.286269ms May 25 21:29:19.543: INFO: Number of nodes with available pods: 0 May 25 21:29:19.543: INFO: Number of running nodes: 0, number of available pods: 0 May 25 21:29:19.547: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4305/daemonsets","resourceVersion":"19115378"},"items":null} May 25 21:29:19.551: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4305/pods","resourceVersion":"19115378"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:29:19.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4305" for this suite. • [SLOW TEST:27.916 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":45,"skipped":742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:29:19.593: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 25 21:29:19.693: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:29:36.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7929" for this suite. 
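A multi-version CRD like the one this test generates can be written by hand; renaming a served version in spec.versions is an ordinary update, and the aggregated OpenAPI document follows. A sketch under assumed names (the stable.example.com group and CronTab kind are illustrative, not the test's generated names):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2          # renaming a served version (say v2 -> v3) is an ordinary
    served: true      # update; the published spec follows the new name
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# The published spec is visible through the aggregated OpenAPI endpoint:
kubectl get --raw /openapi/v2 | grep -o 'stable.example.com/v[0-9]*' | sort -u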
• [SLOW TEST:16.590 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":46,"skipped":799,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:29:36.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 25 21:29:36.249: INFO: Waiting up to 5m0s for pod "pod-83d5f549-56fe-40e3-86a9-a010c52e090f" in namespace "emptydir-8744" to be "success or failure" May 25 21:29:36.253: INFO: Pod "pod-83d5f549-56fe-40e3-86a9-a010c52e090f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.558395ms May 25 21:29:38.258: INFO: Pod "pod-83d5f549-56fe-40e3-86a9-a010c52e090f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008166324s May 25 21:29:40.262: INFO: Pod "pod-83d5f549-56fe-40e3-86a9-a010c52e090f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01299186s STEP: Saw pod success May 25 21:29:40.262: INFO: Pod "pod-83d5f549-56fe-40e3-86a9-a010c52e090f" satisfied condition "success or failure" May 25 21:29:40.266: INFO: Trying to get logs from node jerma-worker pod pod-83d5f549-56fe-40e3-86a9-a010c52e090f container test-container: STEP: delete the pod May 25 21:29:40.329: INFO: Waiting for pod pod-83d5f549-56fe-40e3-86a9-a010c52e090f to disappear May 25 21:29:40.338: INFO: Pod pod-83d5f549-56fe-40e3-86a9-a010c52e090f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:29:40.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8744" for this suite. 
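What this test verifies is simply that a default-medium emptyDir honors ordinary Unix file modes. A hand-rolled equivalent, with the pod name and the busybox image as assumptions (the suite uses its own test images):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /mnt/f && chmod 0644 /mnt/f && ls -ln /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}       # default medium: backed by the node's local disk
EOF
kubectl logs emptydir-mode-demo    # once completed: expect -rw-r--r-- ... /mnt/f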
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":801,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:29:40.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:29:40.412: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82e7e2ca-2975-4259-83a6-7d49b8ddd561" in namespace "projected-5563" to be "success or failure" May 25 21:29:40.415: INFO: Pod "downwardapi-volume-82e7e2ca-2975-4259-83a6-7d49b8ddd561": Phase="Pending", Reason="", readiness=false. Elapsed: 3.020344ms May 25 21:29:42.420: INFO: Pod "downwardapi-volume-82e7e2ca-2975-4259-83a6-7d49b8ddd561": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007832299s May 25 21:29:44.425: INFO: Pod "downwardapi-volume-82e7e2ca-2975-4259-83a6-7d49b8ddd561": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012859396s STEP: Saw pod success May 25 21:29:44.425: INFO: Pod "downwardapi-volume-82e7e2ca-2975-4259-83a6-7d49b8ddd561" satisfied condition "success or failure" May 25 21:29:44.428: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-82e7e2ca-2975-4259-83a6-7d49b8ddd561 container client-container: STEP: delete the pod May 25 21:29:44.453: INFO: Waiting for pod downwardapi-volume-82e7e2ca-2975-4259-83a6-7d49b8ddd561 to disappear May 25 21:29:44.513: INFO: Pod downwardapi-volume-82e7e2ca-2975-4259-83a6-7d49b8ddd561 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:29:44.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5563" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":803,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:29:44.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:29:48.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2830" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":811,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:29:48.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 25 21:29:48.699: INFO: Waiting up to 5m0s for pod "var-expansion-ebbd59d5-7f0a-472b-829b-b60d60131e4c" in namespace "var-expansion-9792" to be "success or failure" May 25 21:29:48.714: INFO: Pod "var-expansion-ebbd59d5-7f0a-472b-829b-b60d60131e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.967601ms May 25 21:29:50.718: INFO: Pod "var-expansion-ebbd59d5-7f0a-472b-829b-b60d60131e4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018873857s May 25 21:29:52.759: INFO: Pod "var-expansion-ebbd59d5-7f0a-472b-829b-b60d60131e4c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059294823s STEP: Saw pod success May 25 21:29:52.759: INFO: Pod "var-expansion-ebbd59d5-7f0a-472b-829b-b60d60131e4c" satisfied condition "success or failure" May 25 21:29:52.761: INFO: Trying to get logs from node jerma-worker pod var-expansion-ebbd59d5-7f0a-472b-829b-b60d60131e4c container dapi-container: STEP: delete the pod May 25 21:29:52.783: INFO: Waiting for pod var-expansion-ebbd59d5-7f0a-472b-829b-b60d60131e4c to disappear May 25 21:29:52.787: INFO: Pod var-expansion-ebbd59d5-7f0a-472b-829b-b60d60131e4c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:29:52.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9792" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":816,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:29:52.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:29:53.102: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:29:53.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-123" for this suite. 
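Enabling the status sub-resource is a one-line addition to the definition; gets, updates, and patches of status then go through a dedicated endpoint that cannot touch the rest of the object. A sketch with hypothetical group and kind names (widgets.demo.example.com is not what this test creates):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}       # serves .../widgets/<name>/status separately
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl apply -f - <<'EOF'
apiVersion: demo.example.com/v1
kind: Widget
metadata:
  name: w1
EOF
# Reads and writes against the sub-resource touch only the status block:
kubectl get --raw /apis/demo.example.com/v1/namespaces/default/widgets/w1/status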
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":51,"skipped":820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:29:53.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:29:53.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 25 21:29:54.207: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T21:29:54Z generation:1 name:name1 resourceVersion:19115631 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:76e78886-f746-4cbe-91c2-f3e6e0f44e57] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 25 21:30:04.227: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T21:30:04Z generation:1 name:name2 resourceVersion:19115680 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:49fed386-3b94-49a4-a04d-3fc15fa9711d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 25 21:30:14.234: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T21:29:54Z generation:2 name:name1 resourceVersion:19115709 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:76e78886-f746-4cbe-91c2-f3e6e0f44e57] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 25 21:30:24.240: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T21:30:04Z generation:2 name:name2 resourceVersion:19115740 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:49fed386-3b94-49a4-a04d-3fc15fa9711d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 25 21:30:34.248: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T21:29:54Z generation:2 name:name1 resourceVersion:19115768 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:76e78886-f746-4cbe-91c2-f3e6e0f44e57] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 25 21:30:44.256: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-25T21:30:04Z generation:2 name:name2 
resourceVersion:19115798 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:49fed386-3b94-49a4-a04d-3fc15fa9711d] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:30:54.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-1710" for this suite. • [SLOW TEST:61.393 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":52,"skipped":856,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:30:54.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:30:54.833: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:30:55.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2103" for this suite. 
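Creating and deleting a definition is an ordinary API round-trip; the apiserver starts, and later stops, serving the group as a side effect. Continuing with the hypothetical widgets CRD sketched above:

kubectl get crd widgets.demo.example.com               # the definition is just an object
kubectl get --raw /apis | grep -c demo.example.com     # its group appears in discovery
kubectl delete crd widgets.demo.example.com            # stored Widget objects go with it
kubectl get widgets 2>&1 | head -n1                    # now an unknown resource type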
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":53,"skipped":856,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:30:55.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-6fa3aa94-b574-4ba5-9fab-2e5fa1dc6221 STEP: Creating a pod to test consume configMaps May 25 21:30:55.548: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc" in namespace "projected-3129" to be "success or failure" May 25 21:30:55.561: INFO: Pod "pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc": Phase="Pending", Reason="", readiness=false. Elapsed: 13.383785ms May 25 21:30:57.564: INFO: Pod "pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016420219s May 25 21:30:59.568: INFO: Pod "pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020793914s May 25 21:31:01.573: INFO: Pod "pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.025885154s STEP: Saw pod success May 25 21:31:01.573: INFO: Pod "pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc" satisfied condition "success or failure" May 25 21:31:01.577: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc container projected-configmap-volume-test: STEP: delete the pod May 25 21:31:01.598: INFO: Waiting for pod pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc to disappear May 25 21:31:01.620: INFO: Pod pod-projected-configmaps-b5fb2350-6500-47c6-99fb-fce06cdaeecc no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:01.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3129" for this suite. 
• [SLOW TEST:6.199 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":882,"failed":0} [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:01.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 25 21:31:05.734: INFO: Pod pod-hostip-aa235348-f294-4410-807c-5a9373a111f9 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:05.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5676" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":882,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:05.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:31:05.832: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67e5cbc4-9a14-44a6-8145-3fd0b4fd9757" in namespace "projected-1846" to be "success or failure" May 25 21:31:05.854: INFO: Pod "downwardapi-volume-67e5cbc4-9a14-44a6-8145-3fd0b4fd9757": Phase="Pending", Reason="", readiness=false. Elapsed: 21.633614ms May 25 21:31:07.874: INFO: Pod "downwardapi-volume-67e5cbc4-9a14-44a6-8145-3fd0b4fd9757": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041631074s May 25 21:31:09.878: INFO: Pod "downwardapi-volume-67e5cbc4-9a14-44a6-8145-3fd0b4fd9757": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046197564s STEP: Saw pod success May 25 21:31:09.878: INFO: Pod "downwardapi-volume-67e5cbc4-9a14-44a6-8145-3fd0b4fd9757" satisfied condition "success or failure" May 25 21:31:09.882: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-67e5cbc4-9a14-44a6-8145-3fd0b4fd9757 container client-container: STEP: delete the pod May 25 21:31:09.904: INFO: Waiting for pod downwardapi-volume-67e5cbc4-9a14-44a6-8145-3fd0b4fd9757 to disappear May 25 21:31:09.908: INFO: Pod downwardapi-volume-67e5cbc4-9a14-44a6-8145-3fd0b4fd9757 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:09.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1846" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":898,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:09.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1629 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1629 STEP: Creating statefulset with conflicting port in namespace statefulset-1629 STEP: Waiting until pod test-pod will start running in namespace statefulset-1629 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1629 May 25 21:31:14.273: INFO: Observed stateful pod in namespace: statefulset-1629, name: ss-0, uid: 20ec9227-7ac7-487d-bdd3-f49d8b2bb75b, status phase: Pending. Waiting for statefulset controller to delete. May 25 21:31:15.269: INFO: Observed stateful pod in namespace: statefulset-1629, name: ss-0, uid: 20ec9227-7ac7-487d-bdd3-f49d8b2bb75b, status phase: Failed. Waiting for statefulset controller to delete. May 25 21:31:15.276: INFO: Observed stateful pod in namespace: statefulset-1629, name: ss-0, uid: 20ec9227-7ac7-487d-bdd3-f49d8b2bb75b, status phase: Failed. Waiting for statefulset controller to delete. 
May 25 21:31:15.297: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1629 STEP: Removing pod with conflicting port in namespace statefulset-1629 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1629 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 25 21:31:19.367: INFO: Deleting all statefulsets in ns statefulset-1629 May 25 21:31:19.370: INFO: Scaling statefulset ss to 0 May 25 21:31:39.386: INFO: Waiting for statefulset status.replicas to be updated to 0 May 25 21:31:39.390: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:39.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1629" for this suite. • [SLOW TEST:29.493 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":57,"skipped":914,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:39.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9940/configmap-test-cf9ac6ff-fd7f-4946-add3-c0f64341104a STEP: Creating a pod to test consume configMaps May 25 21:31:39.504: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417" in namespace "configmap-9940" to be "success or failure" May 25 21:31:39.524: INFO: Pod "pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417": Phase="Pending", Reason="", readiness=false. Elapsed: 19.428973ms May 25 21:31:41.652: INFO: Pod "pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148292054s May 25 21:31:43.657: INFO: Pod "pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417": Phase="Running", Reason="", readiness=true. Elapsed: 4.153045111s May 25 21:31:45.662: INFO: Pod "pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.157440331s STEP: Saw pod success May 25 21:31:45.662: INFO: Pod "pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417" satisfied condition "success or failure" May 25 21:31:45.665: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417 container env-test: STEP: delete the pod May 25 21:31:45.690: INFO: Waiting for pod pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417 to disappear May 25 21:31:45.694: INFO: Pod pod-configmaps-dc1a9e8f-2715-4af0-9967-e90d15c4e417 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:45.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9940" for this suite. • [SLOW TEST:6.290 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":927,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:45.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 25 21:31:45.826: INFO: Waiting up to 5m0s for pod "pod-9da25de6-d1c7-457e-9290-a048045a2e83" in namespace "emptydir-9006" to be "success or failure" May 25 21:31:45.831: INFO: Pod "pod-9da25de6-d1c7-457e-9290-a048045a2e83": Phase="Pending", Reason="", readiness=false. Elapsed: 5.281859ms May 25 21:31:47.953: INFO: Pod "pod-9da25de6-d1c7-457e-9290-a048045a2e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126459118s May 25 21:31:49.956: INFO: Pod "pod-9da25de6-d1c7-457e-9290-a048045a2e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130141949s STEP: Saw pod success May 25 21:31:49.956: INFO: Pod "pod-9da25de6-d1c7-457e-9290-a048045a2e83" satisfied condition "success or failure" May 25 21:31:49.959: INFO: Trying to get logs from node jerma-worker pod pod-9da25de6-d1c7-457e-9290-a048045a2e83 container test-container: STEP: delete the pod May 25 21:31:50.018: INFO: Waiting for pod pod-9da25de6-d1c7-457e-9290-a048045a2e83 to disappear May 25 21:31:50.023: INFO: Pod pod-9da25de6-d1c7-457e-9290-a048045a2e83 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:50.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9006" for this suite. 
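The tmpfs variant differs from the default-medium case sketched earlier only in medium: Memory, which backs the volume with RAM (and charges it against the container's memory limit). A sketch under the same assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "mount | grep ' /mnt ' && touch /mnt/f && chmod 0666 /mnt/f && ls -ln /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory    # tmpfs instead of node-local disk
EOF
kubectl logs emptydir-tmpfs-demo    # mount line shows tmpfs; file mode is -rw-rw-rw-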
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":928,"failed":0} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:50.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-935fa237-d8f5-4d3f-829c-980ccf173f07 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:50.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9955" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":60,"skipped":929,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:50.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:31:50.151: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1375dcf-5e28-4267-a4a4-63070aab9746" in namespace "downward-api-1942" to be "success or failure" May 25 21:31:50.198: INFO: Pod "downwardapi-volume-b1375dcf-5e28-4267-a4a4-63070aab9746": Phase="Pending", Reason="", readiness=false. Elapsed: 46.891494ms May 25 21:31:52.234: INFO: Pod "downwardapi-volume-b1375dcf-5e28-4267-a4a4-63070aab9746": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083070326s May 25 21:31:54.238: INFO: Pod "downwardapi-volume-b1375dcf-5e28-4267-a4a4-63070aab9746": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.087569998s STEP: Saw pod success May 25 21:31:54.238: INFO: Pod "downwardapi-volume-b1375dcf-5e28-4267-a4a4-63070aab9746" satisfied condition "success or failure" May 25 21:31:54.242: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b1375dcf-5e28-4267-a4a4-63070aab9746 container client-container: STEP: delete the pod May 25 21:31:54.427: INFO: Waiting for pod downwardapi-volume-b1375dcf-5e28-4267-a4a4-63070aab9746 to disappear May 25 21:31:54.551: INFO: Pod downwardapi-volume-b1375dcf-5e28-4267-a4a4-63070aab9746 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:54.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1942" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":941,"failed":0} S ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:54.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 25 21:31:54.637: INFO: Waiting up to 5m0s for pod "downward-api-eed69f3c-1b29-4533-b646-f94ed9d8cc77" in namespace "downward-api-7137" to be "success or failure" May 25 21:31:54.640: INFO: Pod "downward-api-eed69f3c-1b29-4533-b646-f94ed9d8cc77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195853ms May 25 21:31:56.725: INFO: Pod "downward-api-eed69f3c-1b29-4533-b646-f94ed9d8cc77": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0878137s May 25 21:31:58.730: INFO: Pod "downward-api-eed69f3c-1b29-4533-b646-f94ed9d8cc77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092901447s STEP: Saw pod success May 25 21:31:58.730: INFO: Pod "downward-api-eed69f3c-1b29-4533-b646-f94ed9d8cc77" satisfied condition "success or failure" May 25 21:31:58.734: INFO: Trying to get logs from node jerma-worker2 pod downward-api-eed69f3c-1b29-4533-b646-f94ed9d8cc77 container dapi-container: STEP: delete the pod May 25 21:31:58.829: INFO: Waiting for pod downward-api-eed69f3c-1b29-4533-b646-f94ed9d8cc77 to disappear May 25 21:31:58.838: INFO: Pod downward-api-eed69f3c-1b29-4533-b646-f94ed9d8cc77 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:31:58.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7137" for this suite. 
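The defaulting behavior this env-var case verifies is easy to reproduce outside the harness: request limits.cpu and limits.memory through resourceFieldRef env vars on a container that sets no limits, and the kubelet substitutes the node's allocatable values. A minimal sketch under the same caveats as above (names and image are illustrative assumptions):

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults-example"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "env"},
				// No resources.limits are set on this container, so the kubelet
				// resolves these selectors to the node's allocatable CPU/memory.
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.cpu",
								Divisor:  resource.MustParse("1"),
							},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								Resource: "limits.memory",
								Divisor:  resource.MustParse("1Mi"),
							},
						},
					},
				},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}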
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":942,"failed":0} ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:31:58.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-9d4308c9-6502-419f-be4a-65cb30f59f73 in namespace container-probe-9423 May 25 21:32:02.941: INFO: Started pod busybox-9d4308c9-6502-419f-be4a-65cb30f59f73 in namespace container-probe-9423 STEP: checking the pod's current state and verifying that restartCount is present May 25 21:32:02.943: INFO: Initial restart count of pod busybox-9d4308c9-6502-419f-be4a-65cb30f59f73 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:36:03.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9423" for this suite. 
• [SLOW TEST:244.805 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":942,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:36:03.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 25 21:36:07.772: INFO: &Pod{ObjectMeta:{send-events-21c3cee5-8c78-40a4-8875-89e292eaefaf events-6867 /api/v1/namespaces/events-6867/pods/send-events-21c3cee5-8c78-40a4-8875-89e292eaefaf 0fc0a9a0-e161-4654-a935-0e6b381b60ec 19117111 0 2020-05-25 21:36:03 +0000 UTC map[name:foo time:743822635] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-fc5rt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-fc5rt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-fc5rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.67,StartTime:2020-05-25 21:36:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:36:06 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://de0b7f666ed3aacfcfcb3e88ef78f7f74055b4733d1969c4813af715fdd1ff2a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 25 21:36:09.776: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 25 21:36:11.784: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:36:11.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6867" for this suite. • [SLOW TEST:8.199 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":64,"skipped":951,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:36:11.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 25 21:36:11.995: INFO: Waiting up to 5m0s for pod "downward-api-c4b1c805-3bbb-4c16-980d-cf10df512973" in namespace "downward-api-6841" to be "success or failure" May 25 21:36:12.005: INFO: Pod "downward-api-c4b1c805-3bbb-4c16-980d-cf10df512973": Phase="Pending", Reason="", readiness=false. Elapsed: 10.554169ms May 25 21:36:14.009: INFO: Pod "downward-api-c4b1c805-3bbb-4c16-980d-cf10df512973": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014755716s May 25 21:36:16.013: INFO: Pod "downward-api-c4b1c805-3bbb-4c16-980d-cf10df512973": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018586101s STEP: Saw pod success May 25 21:36:16.013: INFO: Pod "downward-api-c4b1c805-3bbb-4c16-980d-cf10df512973" satisfied condition "success or failure" May 25 21:36:16.016: INFO: Trying to get logs from node jerma-worker2 pod downward-api-c4b1c805-3bbb-4c16-980d-cf10df512973 container dapi-container: STEP: delete the pod May 25 21:36:16.049: INFO: Waiting for pod downward-api-c4b1c805-3bbb-4c16-980d-cf10df512973 to disappear May 25 21:36:16.094: INFO: Pod downward-api-c4b1c805-3bbb-4c16-980d-cf10df512973 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:36:16.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6841" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":988,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:36:16.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-c0791216-cf22-442e-8d50-f7b38feef3cb STEP: Creating a pod to test consume secrets May 25 21:36:16.164: INFO: Waiting up to 5m0s for pod "pod-secrets-f2f8672a-9d08-4b10-a225-72500d752fb5" in namespace "secrets-2282" to be "success or failure" May 25 21:36:16.166: INFO: Pod "pod-secrets-f2f8672a-9d08-4b10-a225-72500d752fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.319078ms May 25 21:36:18.202: INFO: Pod "pod-secrets-f2f8672a-9d08-4b10-a225-72500d752fb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037799685s May 25 21:36:20.206: INFO: Pod "pod-secrets-f2f8672a-9d08-4b10-a225-72500d752fb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042127966s STEP: Saw pod success May 25 21:36:20.206: INFO: Pod "pod-secrets-f2f8672a-9d08-4b10-a225-72500d752fb5" satisfied condition "success or failure" May 25 21:36:20.210: INFO: Trying to get logs from node jerma-worker pod pod-secrets-f2f8672a-9d08-4b10-a225-72500d752fb5 container secret-volume-test: STEP: delete the pod May 25 21:36:20.240: INFO: Waiting for pod pod-secrets-f2f8672a-9d08-4b10-a225-72500d752fb5 to disappear May 25 21:36:20.257: INFO: Pod pod-secrets-f2f8672a-9d08-4b10-a225-72500d752fb5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:36:20.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2282" for this suite. 
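For the secret-with-mappings case above, the interesting part is the Items list, which remaps a secret key onto a new file path inside the mount. A minimal sketch; names, data, and the 0644 mode are illustrative assumptions (the log's token mount shows the same default, *420 == 0644):

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	defaultMode := int32(0644)
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test-map-example"}, // illustrative
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: secret.Name,
						// The mapping: key "data-1" appears as file "new-path-data-1".
						Items:       []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
						DefaultMode: &defaultMode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(secret)
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}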
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":996,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:36:20.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-1bc17d6e-d439-4a00-91fc-33909042749d STEP: Creating a pod to test consume secrets May 25 21:36:20.382: INFO: Waiting up to 5m0s for pod "pod-secrets-6f44a80c-aa0b-4ea4-b1c5-82d19157426c" in namespace "secrets-2241" to be "success or failure" May 25 21:36:20.427: INFO: Pod "pod-secrets-6f44a80c-aa0b-4ea4-b1c5-82d19157426c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.713014ms May 25 21:36:22.430: INFO: Pod "pod-secrets-6f44a80c-aa0b-4ea4-b1c5-82d19157426c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04866227s May 25 21:36:24.435: INFO: Pod "pod-secrets-6f44a80c-aa0b-4ea4-b1c5-82d19157426c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052984452s STEP: Saw pod success May 25 21:36:24.435: INFO: Pod "pod-secrets-6f44a80c-aa0b-4ea4-b1c5-82d19157426c" satisfied condition "success or failure" May 25 21:36:24.437: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-6f44a80c-aa0b-4ea4-b1c5-82d19157426c container secret-env-test: STEP: delete the pod May 25 21:36:24.469: INFO: Waiting for pod pod-secrets-6f44a80c-aa0b-4ea4-b1c5-82d19157426c to disappear May 25 21:36:24.482: INFO: Pod pod-secrets-6f44a80c-aa0b-4ea4-b1c5-82d19157426c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:36:24.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2241" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1004,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:36:24.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 25 21:36:29.106: INFO: Successfully updated pod "labelsupdate6e5b42ec-d05f-40ab-92ef-6f00f5d9ff81" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:36:33.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4839" for this suite. • [SLOW TEST:8.650 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1025,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:36:33.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 25 21:36:33.214: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
May 25 21:36:33.923: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 25 21:36:36.446: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039394, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039394, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039394, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039393, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 21:36:38.449: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039394, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039394, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039394, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039393, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 21:36:41.280: INFO: Waited 771.733728ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:36:41.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9398" for this suite. 
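What "Registering the sample API server" amounts to is creating an APIService object that tells the aggregation layer to proxy one group/version to an in-cluster Service. A rough sketch only: the group, service name, and priorities are assumptions for illustration (the suite registers its own sample group), and a real registration should carry a CABundle rather than skipping TLS verification.

package main

import (
	"encoding/json"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	apiService := &apiregv1.APIService{
		// The object name must be "<version>.<group>".
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregv1.APIServiceSpec{
			Group:   "wardle.example.com", // illustrative group
			Version: "v1alpha1",
			Service: &apiregv1.ServiceReference{
				Namespace: "aggregator-9398",
				Name:      "sample-api", // illustrative service name
			},
			// Sketch shortcut; production registrations should set CABundle instead.
			InsecureSkipTLSVerify: true,
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(apiService)
}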
• [SLOW TEST:8.629 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":69,"skipped":1031,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:36:41.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:36:42.131: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 25 21:36:47.135: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 21:36:47.135: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 25 21:36:47.166: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6158 /apis/apps/v1/namespaces/deployment-6158/deployments/test-cleanup-deployment 2d015476-1184-446c-86fa-2087863e77a8 19117436 1 2020-05-25 21:36:47 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001d67938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 25 21:36:47.172: INFO: New
ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-6158 /apis/apps/v1/namespaces/deployment-6158/replicasets/test-cleanup-deployment-55ffc6b7b6 58424d62-1084-4ec1-a3e1-03c1bbbda3d9 19117438 1 2020-05-25 21:36:47 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2d015476-1184-446c-86fa-2087863e77a8 0xc003bac3b7 0xc003bac3b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003bac428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 21:36:47.172: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 25 21:36:47.172: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6158 /apis/apps/v1/namespaces/deployment-6158/replicasets/test-cleanup-controller da381193-c9e0-4ee8-ab05-1242eddbb923 19117437 1 2020-05-25 21:36:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 2d015476-1184-446c-86fa-2087863e77a8 0xc003bac2e7 0xc003bac2e8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003bac348 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 21:36:47.191: INFO: Pod "test-cleanup-controller-87mbg" is available: &Pod{ObjectMeta:{test-cleanup-controller-87mbg test-cleanup-controller- deployment-6158 /api/v1/namespaces/deployment-6158/pods/test-cleanup-controller-87mbg a6fbe05c-266d-41d6-9e30-68e5ba65a617 19117414 0 2020-05-25 21:36:42 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller da381193-c9e0-4ee8-ab05-1242eddbb923 0xc0044fefb7 
0xc0044fefb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5ghdv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5ghdv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5ghdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.114,StartTime:2020-05-25 21:36:42 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:36:45 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ddf6fae53930a0e20b6031941370e61a33fc21c51199d3652d04d210d8bde459,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:36:47.191: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-bj6hb" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-bj6hb test-cleanup-deployment-55ffc6b7b6- deployment-6158 /api/v1/namespaces/deployment-6158/pods/test-cleanup-deployment-55ffc6b7b6-bj6hb e4995734-b15b-468a-a96a-cf912df06048 19117444 0 2020-05-25 21:36:47 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 58424d62-1084-4ec1-a3e1-03c1bbbda3d9 0xc0044ff167 0xc0044ff168}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5ghdv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5ghdv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5ghdv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,
ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:36:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:36:47.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6158" for this suite. • [SLOW TEST:5.511 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":70,"skipped":1035,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:36:47.281: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:36:47.351: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 25 21:36:50.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3775 create -f -' May 25 21:36:55.793: INFO: stderr: "" May 25 21:36:55.793: INFO: stdout: "e2e-test-crd-publish-openapi-9595-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 25 21:36:55.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3775 delete e2e-test-crd-publish-openapi-9595-crds test-cr' May 25 21:36:55.903: INFO: stderr: "" May 25 21:36:55.903: INFO: stdout: "e2e-test-crd-publish-openapi-9595-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 25 21:36:55.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3775 apply -f -' May 25 21:36:56.768: INFO: stderr: "" May 25 21:36:56.768: INFO: stdout: "e2e-test-crd-publish-openapi-9595-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 25 21:36:56.768: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3775 delete e2e-test-crd-publish-openapi-9595-crds test-cr' May 25 21:36:56.880: INFO: stderr: "" May 25 21:36:56.880: INFO: stdout: "e2e-test-crd-publish-openapi-9595-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 25 21:36:56.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9595-crds' May 25 21:36:57.119: INFO: stderr: "" May 25 21:36:57.119: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9595-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:37:00.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3775" for this suite. • [SLOW TEST:12.747 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":71,"skipped":1048,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:37:00.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 25 21:37:00.123: INFO: Waiting up to 5m0s for pod "var-expansion-eab72e6e-ef08-4905-9ce5-7c73d6b4cfdf" in namespace "var-expansion-7085" to be "success or failure" May 25 21:37:00.143: INFO: Pod "var-expansion-eab72e6e-ef08-4905-9ce5-7c73d6b4cfdf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.146936ms May 25 21:37:02.251: INFO: Pod "var-expansion-eab72e6e-ef08-4905-9ce5-7c73d6b4cfdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12811565s May 25 21:37:04.255: INFO: Pod "var-expansion-eab72e6e-ef08-4905-9ce5-7c73d6b4cfdf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.132061737s STEP: Saw pod success May 25 21:37:04.255: INFO: Pod "var-expansion-eab72e6e-ef08-4905-9ce5-7c73d6b4cfdf" satisfied condition "success or failure" May 25 21:37:04.258: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-eab72e6e-ef08-4905-9ce5-7c73d6b4cfdf container dapi-container: STEP: delete the pod May 25 21:37:04.300: INFO: Waiting for pod var-expansion-eab72e6e-ef08-4905-9ce5-7c73d6b4cfdf to disappear May 25 21:37:04.328: INFO: Pod var-expansion-eab72e6e-ef08-4905-9ce5-7c73d6b4cfdf no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:37:04.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7085" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1118,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:37:04.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 25 21:37:12.496: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 21:37:12.501: INFO: Pod pod-with-prestop-http-hook still exists May 25 21:37:14.501: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 21:37:14.526: INFO: Pod pod-with-prestop-http-hook still exists May 25 21:37:16.501: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 21:37:16.520: INFO: Pod pod-with-prestop-http-hook still exists May 25 21:37:18.501: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 21:37:18.506: INFO: Pod pod-with-prestop-http-hook still exists May 25 21:37:20.501: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 25 21:37:20.505: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:37:20.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4079" for this suite. 
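The hook wiring in the case above, sketched: a preStop httpGet that fires against a separate handler pod while this pod is being deleted, which is why the log polls for the pod to disappear before checking that the handler saw the request. The target address, port, and path are illustrative assumptions; as with probes, the handler field is named Handler in v1.17-era client-go and LifecycleHandler in newer releases.

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "busybox", // illustrative; the suite uses its own test image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{ // LifecycleHandler in client-go >= 0.23
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",  // illustrative path
							Host: "10.244.1.1",         // illustrative handler-pod IP
							Port: intstr.FromInt(8080), // illustrative port
						},
					},
				},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}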
• [SLOW TEST:16.184 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1129,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:37:20.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1317 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1317 STEP: creating replication controller externalsvc in namespace services-1317 I0525 21:37:20.832124 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1317, replica count: 2 I0525 21:37:23.882590 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 21:37:26.882829 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 25 21:37:27.087: INFO: Creating new exec pod May 25 21:37:31.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1317 execpodqxl4t -- /bin/sh -x -c nslookup clusterip-service' May 25 21:37:31.611: INFO: stderr: "I0525 21:37:31.305360 1382 log.go:172] (0xc0000f5340) (0xc0006ddc20) Create stream\nI0525 21:37:31.305417 1382 log.go:172] (0xc0000f5340) (0xc0006ddc20) Stream added, broadcasting: 1\nI0525 21:37:31.307943 1382 log.go:172] (0xc0000f5340) Reply frame received for 1\nI0525 21:37:31.307985 1382 log.go:172] (0xc0000f5340) (0xc00090e000) Create stream\nI0525 21:37:31.307998 1382 log.go:172] (0xc0000f5340) (0xc00090e000) Stream added, broadcasting: 3\nI0525 21:37:31.308847 1382 log.go:172] (0xc0000f5340) Reply frame received for 3\nI0525 21:37:31.308881 1382 log.go:172] (0xc0000f5340) (0xc000286000) Create stream\nI0525 21:37:31.308896 1382 log.go:172] (0xc0000f5340) (0xc000286000) Stream added, broadcasting: 5\nI0525 21:37:31.310101 1382 log.go:172] (0xc0000f5340) Reply 
frame received for 5\nI0525 21:37:31.441443 1382 log.go:172] (0xc0000f5340) Data frame received for 5\nI0525 21:37:31.441648 1382 log.go:172] (0xc000286000) (5) Data frame handling\nI0525 21:37:31.441674 1382 log.go:172] (0xc000286000) (5) Data frame sent\n+ nslookup clusterip-service\nI0525 21:37:31.602244 1382 log.go:172] (0xc0000f5340) Data frame received for 3\nI0525 21:37:31.602282 1382 log.go:172] (0xc00090e000) (3) Data frame handling\nI0525 21:37:31.602308 1382 log.go:172] (0xc00090e000) (3) Data frame sent\nI0525 21:37:31.603368 1382 log.go:172] (0xc0000f5340) Data frame received for 3\nI0525 21:37:31.603396 1382 log.go:172] (0xc00090e000) (3) Data frame handling\nI0525 21:37:31.603425 1382 log.go:172] (0xc00090e000) (3) Data frame sent\nI0525 21:37:31.604097 1382 log.go:172] (0xc0000f5340) Data frame received for 3\nI0525 21:37:31.604122 1382 log.go:172] (0xc00090e000) (3) Data frame handling\nI0525 21:37:31.604785 1382 log.go:172] (0xc0000f5340) Data frame received for 5\nI0525 21:37:31.604801 1382 log.go:172] (0xc000286000) (5) Data frame handling\nI0525 21:37:31.606399 1382 log.go:172] (0xc0000f5340) Data frame received for 1\nI0525 21:37:31.606414 1382 log.go:172] (0xc0006ddc20) (1) Data frame handling\nI0525 21:37:31.606426 1382 log.go:172] (0xc0006ddc20) (1) Data frame sent\nI0525 21:37:31.606445 1382 log.go:172] (0xc0000f5340) (0xc0006ddc20) Stream removed, broadcasting: 1\nI0525 21:37:31.606723 1382 log.go:172] (0xc0000f5340) (0xc0006ddc20) Stream removed, broadcasting: 1\nI0525 21:37:31.606754 1382 log.go:172] (0xc0000f5340) Go away received\nI0525 21:37:31.606781 1382 log.go:172] (0xc0000f5340) (0xc00090e000) Stream removed, broadcasting: 3\nI0525 21:37:31.606799 1382 log.go:172] (0xc0000f5340) (0xc000286000) Stream removed, broadcasting: 5\n" May 25 21:37:31.611: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1317.svc.cluster.local\tcanonical name = externalsvc.services-1317.svc.cluster.local.\nName:\texternalsvc.services-1317.svc.cluster.local\nAddress: 10.111.119.145\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1317, will wait for the garbage collector to delete the pods May 25 21:37:31.672: INFO: Deleting ReplicationController externalsvc took: 7.196486ms May 25 21:37:31.972: INFO: Terminating ReplicationController externalsvc pods took: 300.264369ms May 25 21:37:39.631: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:37:39.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1317" for this suite. 
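The type flip this test performs is a plain update of the Service spec: an ExternalName service keeps no cluster IP, selector, or ports, and the cluster DNS answers for it with a CNAME, which is exactly what the nslookup output above shows. A sketch using the names from this run (the initial port number is an assumption):

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "clusterip-service", Namespace: "services-1317"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeClusterIP,
			Selector: map[string]string{"name": "externalsvc"},
			Ports:    []corev1.ServicePort{{Port: 80}}, // assumed port
		},
	}
	// The conversion: drop the ClusterIP-specific fields, point at a DNS name.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.services-1317.svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.Selector = nil
	svc.Spec.Ports = nil
	_ = json.NewEncoder(os.Stdout).Encode(svc)
}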
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:19.140 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":74,"skipped":1134,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:37:39.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-bdadf57e-816b-4540-8cfa-7dcabf2a31f6 STEP: Creating configMap with name cm-test-opt-upd-fc6912cd-73df-4d8b-b09a-d9570929368a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-bdadf57e-816b-4540-8cfa-7dcabf2a31f6 STEP: Updating configmap cm-test-opt-upd-fc6912cd-73df-4d8b-b09a-d9570929368a STEP: Creating configMap with name cm-test-opt-create-8782ec4e-b7ba-4efa-8726-3a233cdae9a5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:37:49.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3714" for this suite. 
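The projected-volume case above hinges on the Optional flag: with optional configMap sources, deleting one source or creating a previously missing one does not break the mount, it only changes which files appear, so the test just waits for the kubelet to resync the volume. A sketch of the volume shape; the configMap names are shortened stand-ins for the generated ones in the log, and the image is illustrative:

package main

import (
	"encoding/json"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
								Optional:             &optional, // mount survives this map's deletion
							}},
							{ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
								Optional:             &optional,
							}},
						},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-configmap-volume-test",
				Image:        "busybox", // illustrative
				Command:      []string{"sh", "-c", "sleep 600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
	_ = json.NewEncoder(os.Stdout).Encode(pod)
}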
• [SLOW TEST:10.192 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1143,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:37:49.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:37:49.938: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 25 21:37:52.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 create -f -' May 25 21:37:58.034: INFO: stderr: "" May 25 21:37:58.034: INFO: stdout: "e2e-test-crd-publish-openapi-8224-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 25 21:37:58.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 delete e2e-test-crd-publish-openapi-8224-crds test-cr' May 25 21:37:58.146: INFO: stderr: "" May 25 21:37:58.146: INFO: stdout: "e2e-test-crd-publish-openapi-8224-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 25 21:37:58.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 apply -f -' May 25 21:37:58.578: INFO: stderr: "" May 25 21:37:58.579: INFO: stdout: "e2e-test-crd-publish-openapi-8224-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 25 21:37:58.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1793 delete e2e-test-crd-publish-openapi-8224-crds test-cr' May 25 21:37:58.685: INFO: stderr: "" May 25 21:37:58.685: INFO: stdout: "e2e-test-crd-publish-openapi-8224-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 25 21:37:58.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8224-crds' May 25 21:37:58.966: INFO: stderr: "" May 25 21:37:58.966: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8224-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n 
object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:00.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1793" for this suite. • [SLOW TEST:10.995 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":76,"skipped":1152,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:00.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:38:00.890: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:02.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7101" for this suite. 
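The defaulting verified by this test comes from default markers in the CRD's structural schema (apiextensions.k8s.io/v1). A rough sketch of such a schema fragment in Go, assuming the apiextensions-apiserver types; the spec.replicas field is invented for illustration:

package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	// A structural schema fragment with a defaulted field. The API server
	// applies the default both when a request omits the field and when an
	// undefaulted object is read back from storage, the two paths this
	// test exercises.
	schema := apiextv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextv1.JSONSchemaProps{
			"spec": {
				Type: "object",
				Properties: map[string]apiextv1.JSONSchemaProps{
					"replicas": {
						Type:    "integer",
						Default: &apiextv1.JSON{Raw: []byte(`1`)},
					},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(schema, "", "  ")
	fmt.Println(string(out))
}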
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":77,"skipped":1154,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:02.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 25 21:38:02.714: INFO: created pod pod-service-account-defaultsa May 25 21:38:02.715: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 25 21:38:02.718: INFO: created pod pod-service-account-mountsa May 25 21:38:02.719: INFO: pod pod-service-account-mountsa service account token volume mount: true May 25 21:38:02.738: INFO: created pod pod-service-account-nomountsa May 25 21:38:02.738: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 25 21:38:02.755: INFO: created pod pod-service-account-defaultsa-mountspec May 25 21:38:02.755: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 25 21:38:02.888: INFO: created pod pod-service-account-mountsa-mountspec May 25 21:38:02.888: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 25 21:38:02.923: INFO: created pod pod-service-account-nomountsa-mountspec May 25 21:38:02.923: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 25 21:38:03.055: INFO: created pod pod-service-account-defaultsa-nomountspec May 25 21:38:03.055: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 25 21:38:03.073: INFO: created pod pod-service-account-mountsa-nomountspec May 25 21:38:03.073: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 25 21:38:03.091: INFO: created pod pod-service-account-nomountsa-nomountspec May 25 21:38:03.091: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:03.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4993" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":78,"skipped":1158,"failed":0} SS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:03.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:38:03.366: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:17.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9138" for this suite. • [SLOW TEST:14.457 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1160,"failed":0} [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:17.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 25 21:38:17.867: INFO: Waiting up to 5m0s for pod "pod-d9556325-fdaf-4a1c-a5ef-e1de6c8d3dff" in namespace "emptydir-650" to be "success or failure" May 25 21:38:17.874: INFO: Pod "pod-d9556325-fdaf-4a1c-a5ef-e1de6c8d3dff": Phase="Pending", Reason="", readiness=false. Elapsed: 7.419352ms May 25 21:38:19.879: INFO: Pod "pod-d9556325-fdaf-4a1c-a5ef-e1de6c8d3dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01171187s May 25 21:38:21.883: INFO: Pod "pod-d9556325-fdaf-4a1c-a5ef-e1de6c8d3dff": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016284347s STEP: Saw pod success May 25 21:38:21.883: INFO: Pod "pod-d9556325-fdaf-4a1c-a5ef-e1de6c8d3dff" satisfied condition "success or failure" May 25 21:38:21.886: INFO: Trying to get logs from node jerma-worker pod pod-d9556325-fdaf-4a1c-a5ef-e1de6c8d3dff container test-container: STEP: delete the pod May 25 21:38:21.912: INFO: Waiting for pod pod-d9556325-fdaf-4a1c-a5ef-e1de6c8d3dff to disappear May 25 21:38:21.916: INFO: Pod pod-d9556325-fdaf-4a1c-a5ef-e1de6c8d3dff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:21.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-650" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1160,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:21.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 21:38:26.213: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:26.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2953" for this suite. 
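The assertion 'Expected: &{OK} to match Container's Termination Message: OK' works because the container wrote its message to the termination-message file before exiting zero. A sketch of a pod that does the same, with an assumed busybox image; FallbackToLogsOnError only substitutes the tail of the container log when the container fails and the file is empty:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "termination-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "writer",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				// On success the message comes from the file; the fallback
				// policy is inert unless the container fails with an empty file.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}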
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:26.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 25 21:38:26.413: INFO: Waiting up to 5m0s for pod "pod-3745ae25-e46f-443f-9138-7f018a7a36fe" in namespace "emptydir-3391" to be "success or failure" May 25 21:38:26.449: INFO: Pod "pod-3745ae25-e46f-443f-9138-7f018a7a36fe": Phase="Pending", Reason="", readiness=false. Elapsed: 36.028713ms May 25 21:38:28.454: INFO: Pod "pod-3745ae25-e46f-443f-9138-7f018a7a36fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040629735s May 25 21:38:30.458: INFO: Pod "pod-3745ae25-e46f-443f-9138-7f018a7a36fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044580236s STEP: Saw pod success May 25 21:38:30.458: INFO: Pod "pod-3745ae25-e46f-443f-9138-7f018a7a36fe" satisfied condition "success or failure" May 25 21:38:30.461: INFO: Trying to get logs from node jerma-worker2 pod pod-3745ae25-e46f-443f-9138-7f018a7a36fe container test-container: STEP: delete the pod May 25 21:38:30.491: INFO: Waiting for pod pod-3745ae25-e46f-443f-9138-7f018a7a36fe to disappear May 25 21:38:30.527: INFO: Pod pod-3745ae25-e46f-443f-9138-7f018a7a36fe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:30.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3391" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1194,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:30.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:38:31.252: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 21:38:33.260: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:38:36.290: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:38:36.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:37.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4230" for this suite. 
STEP: Destroying namespace "webhook-4230-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.033 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":83,"skipped":1200,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:37.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:38:37.666: INFO: Creating ReplicaSet my-hostname-basic-46302be2-428d-4a2a-8933-62dbf1075be3 May 25 21:38:37.725: INFO: Pod name my-hostname-basic-46302be2-428d-4a2a-8933-62dbf1075be3: Found 0 pods out of 1 May 25 21:38:42.729: INFO: Pod name my-hostname-basic-46302be2-428d-4a2a-8933-62dbf1075be3: Found 1 pods out of 1 May 25 21:38:42.729: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-46302be2-428d-4a2a-8933-62dbf1075be3" is running May 25 21:38:42.731: INFO: Pod "my-hostname-basic-46302be2-428d-4a2a-8933-62dbf1075be3-fl7c2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 21:38:37 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 21:38:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 21:38:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 21:38:37 +0000 UTC Reason: Message:}]) May 25 21:38:42.731: INFO: Trying to dial the pod May 25 21:38:47.744: INFO: Controller my-hostname-basic-46302be2-428d-4a2a-8933-62dbf1075be3: Got expected result from replica 1 [my-hostname-basic-46302be2-428d-4a2a-8933-62dbf1075be3-fl7c2]: "my-hostname-basic-46302be2-428d-4a2a-8933-62dbf1075be3-fl7c2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:47.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9352" for this suite. 
• [SLOW TEST:10.185 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":84,"skipped":1254,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:47.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-2aae9f35-a305-4e7b-b4e1-75145cb70e95 STEP: Creating a pod to test consume secrets May 25 21:38:47.894: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bc8d0f25-50dd-4354-a0c3-e1dc50a38a6f" in namespace "projected-2325" to be "success or failure" May 25 21:38:47.922: INFO: Pod "pod-projected-secrets-bc8d0f25-50dd-4354-a0c3-e1dc50a38a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.620455ms May 25 21:38:49.927: INFO: Pod "pod-projected-secrets-bc8d0f25-50dd-4354-a0c3-e1dc50a38a6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03341014s May 25 21:38:51.932: INFO: Pod "pod-projected-secrets-bc8d0f25-50dd-4354-a0c3-e1dc50a38a6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03806106s STEP: Saw pod success May 25 21:38:51.932: INFO: Pod "pod-projected-secrets-bc8d0f25-50dd-4354-a0c3-e1dc50a38a6f" satisfied condition "success or failure" May 25 21:38:51.935: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-bc8d0f25-50dd-4354-a0c3-e1dc50a38a6f container projected-secret-volume-test: STEP: delete the pod May 25 21:38:51.971: INFO: Waiting for pod pod-projected-secrets-bc8d0f25-50dd-4354-a0c3-e1dc50a38a6f to disappear May 25 21:38:51.996: INFO: Pod pod-projected-secrets-bc8d0f25-50dd-4354-a0c3-e1dc50a38a6f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:38:51.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2325" for this suite. 
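The two knobs asserted on in this projected-secret test are defaultMode on the projected volume and fsGroup in the pod security context: the projected files come out group-owned by the fsGroup with the requested permission bits. A sketch with illustrative names, IDs, and mode:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid, fsGroup := int64(1000), int64(1001)
	mode := int32(0440)
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ln /etc/secret && cat /etc/secret/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "sec", MountPath: "/etc/secret"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "sec",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode, // permission bits applied to every projected file
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}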
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1260,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:38:52.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1161 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 21:38:52.123: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 25 21:39:18.272: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.80 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1161 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 21:39:18.272: INFO: >>> kubeConfig: /root/.kube/config I0525 21:39:18.308203 6 log.go:172] (0xc004a1c370) (0xc0027be5a0) Create stream I0525 21:39:18.308253 6 log.go:172] (0xc004a1c370) (0xc0027be5a0) Stream added, broadcasting: 1 I0525 21:39:18.312084 6 log.go:172] (0xc004a1c370) Reply frame received for 1 I0525 21:39:18.312152 6 log.go:172] (0xc004a1c370) (0xc0027be640) Create stream I0525 21:39:18.312202 6 log.go:172] (0xc004a1c370) (0xc0027be640) Stream added, broadcasting: 3 I0525 21:39:18.313417 6 log.go:172] (0xc004a1c370) Reply frame received for 3 I0525 21:39:18.313463 6 log.go:172] (0xc004a1c370) (0xc0027be6e0) Create stream I0525 21:39:18.313503 6 log.go:172] (0xc004a1c370) (0xc0027be6e0) Stream added, broadcasting: 5 I0525 21:39:18.314506 6 log.go:172] (0xc004a1c370) Reply frame received for 5 I0525 21:39:19.415793 6 log.go:172] (0xc004a1c370) Data frame received for 3 I0525 21:39:19.415840 6 log.go:172] (0xc0027be640) (3) Data frame handling I0525 21:39:19.415862 6 log.go:172] (0xc0027be640) (3) Data frame sent I0525 21:39:19.415881 6 log.go:172] (0xc004a1c370) Data frame received for 3 I0525 21:39:19.415920 6 log.go:172] (0xc0027be640) (3) Data frame handling I0525 21:39:19.415958 6 log.go:172] (0xc004a1c370) Data frame received for 5 I0525 21:39:19.415983 6 log.go:172] (0xc0027be6e0) (5) Data frame handling I0525 21:39:19.418409 6 log.go:172] (0xc004a1c370) Data frame received for 1 I0525 21:39:19.418442 6 log.go:172] (0xc0027be5a0) (1) Data frame handling I0525 21:39:19.418479 6 log.go:172] (0xc0027be5a0) (1) Data frame sent I0525 21:39:19.418505 6 log.go:172] (0xc004a1c370) (0xc0027be5a0) Stream removed, broadcasting: 1 I0525 21:39:19.418536 6 log.go:172] (0xc004a1c370) Go away received I0525 21:39:19.419114 6 log.go:172] (0xc004a1c370) (0xc0027be5a0) Stream removed, broadcasting: 1 I0525 21:39:19.419138 6 log.go:172] (0xc004a1c370) (0xc0027be640) Stream removed, 
broadcasting: 3 I0525 21:39:19.419149 6 log.go:172] (0xc004a1c370) (0xc0027be6e0) Stream removed, broadcasting: 5 May 25 21:39:19.419: INFO: Found all expected endpoints: [netserver-0] May 25 21:39:19.422: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.128 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1161 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 21:39:19.422: INFO: >>> kubeConfig: /root/.kube/config I0525 21:39:19.451798 6 log.go:172] (0xc001b1b080) (0xc0021cc280) Create stream I0525 21:39:19.451826 6 log.go:172] (0xc001b1b080) (0xc0021cc280) Stream added, broadcasting: 1 I0525 21:39:19.455157 6 log.go:172] (0xc001b1b080) Reply frame received for 1 I0525 21:39:19.455224 6 log.go:172] (0xc001b1b080) (0xc001a44000) Create stream I0525 21:39:19.455251 6 log.go:172] (0xc001b1b080) (0xc001a44000) Stream added, broadcasting: 3 I0525 21:39:19.456393 6 log.go:172] (0xc001b1b080) Reply frame received for 3 I0525 21:39:19.456430 6 log.go:172] (0xc001b1b080) (0xc0021cc320) Create stream I0525 21:39:19.456445 6 log.go:172] (0xc001b1b080) (0xc0021cc320) Stream added, broadcasting: 5 I0525 21:39:19.457871 6 log.go:172] (0xc001b1b080) Reply frame received for 5 I0525 21:39:20.541492 6 log.go:172] (0xc001b1b080) Data frame received for 3 I0525 21:39:20.541586 6 log.go:172] (0xc001a44000) (3) Data frame handling I0525 21:39:20.541617 6 log.go:172] (0xc001a44000) (3) Data frame sent I0525 21:39:20.541745 6 log.go:172] (0xc001b1b080) Data frame received for 5 I0525 21:39:20.541780 6 log.go:172] (0xc0021cc320) (5) Data frame handling I0525 21:39:20.542043 6 log.go:172] (0xc001b1b080) Data frame received for 3 I0525 21:39:20.542062 6 log.go:172] (0xc001a44000) (3) Data frame handling I0525 21:39:20.543341 6 log.go:172] (0xc001b1b080) Data frame received for 1 I0525 21:39:20.543375 6 log.go:172] (0xc0021cc280) (1) Data frame handling I0525 21:39:20.543425 6 log.go:172] (0xc0021cc280) (1) Data frame sent I0525 21:39:20.543491 6 log.go:172] (0xc001b1b080) (0xc0021cc280) Stream removed, broadcasting: 1 I0525 21:39:20.543640 6 log.go:172] (0xc001b1b080) (0xc0021cc280) Stream removed, broadcasting: 1 I0525 21:39:20.543671 6 log.go:172] (0xc001b1b080) (0xc001a44000) Stream removed, broadcasting: 3 I0525 21:39:20.543704 6 log.go:172] (0xc001b1b080) (0xc0021cc320) Stream removed, broadcasting: 5 May 25 21:39:20.543: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 I0525 21:39:20.543795 6 log.go:172] (0xc001b1b080) Go away received May 25 21:39:20.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1161" for this suite. 
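The probe above is `echo hostName | nc -w 1 -u 10.244.1.80 8081`, run inside a host-network test pod against each netserver pod. The same round-trip in plain Go, as a sketch: it sends the literal command the log shows and prints the reply (the netserver's hostname). The pod IP is taken from this run's log, and the program would have to run somewhere that can reach the pod network:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the netserver pod's UDP endpoint, as the nc invocation does.
	conn, err := net.DialTimeout("udp", "10.244.1.80:8081", time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Send the "hostName" command (echo appends the trailing newline).
	if _, err := conn.Write([]byte("hostName\n")); err != nil {
		panic(err)
	}
	if err := conn.SetReadDeadline(time.Now().Add(time.Second)); err != nil {
		panic(err)
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("reply: %s\n", buf[:n]) // expected: the netserver pod's hostname
}

Collecting the replies from every netserver pod is how the test concludes "Found all expected endpoints: [netserver-0]" and "[netserver-1]".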
• [SLOW TEST:28.549 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1279,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:39:20.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:39:21.456: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 21:39:23.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039561, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039561, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039561, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039561, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 21:39:25.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039561, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039561, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039561, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039561, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:39:28.896: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:39:28.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5458-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:39:30.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8894" for this suite. STEP: Destroying namespace "webhook-8894-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.778 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":87,"skipped":1286,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:39:30.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 25 21:39:30.375: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix089974351/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:39:30.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3537" for this suite. 
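Once kubectl proxy is listening on a unix socket, any HTTP client that can dial that socket can reach the API through it, which is how the test retrieves /api/. A sketch with an assumed socket path; the host in the URL is ignored because every connection is routed through the socket:

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	// Pair with: kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/tmp/kubectl-proxy.sock")
			},
		},
	}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the APIVersions object, same as the test retrieves
}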
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":88,"skipped":1287,"failed":0} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:39:30.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:39:37.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-1854" for this suite. STEP: Destroying namespace "nsdeletetest-2647" for this suite. May 25 21:39:37.075: INFO: Namespace nsdeletetest-2647 was already deleted STEP: Destroying namespace "nsdeletetest-4635" for this suite. 
• [SLOW TEST:6.626 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":89,"skipped":1291,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:39:37.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-7725 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7725 to expose endpoints map[] May 25 21:39:37.256: INFO: Get endpoints failed (2.716625ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 25 21:39:38.264: INFO: successfully validated that service endpoint-test2 in namespace services-7725 exposes endpoints map[] (1.011400213s elapsed) STEP: Creating pod pod1 in namespace services-7725 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7725 to expose endpoints map[pod1:[80]] May 25 21:39:42.315: INFO: successfully validated that service endpoint-test2 in namespace services-7725 exposes endpoints map[pod1:[80]] (4.04265889s elapsed) STEP: Creating pod pod2 in namespace services-7725 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7725 to expose endpoints map[pod1:[80] pod2:[80]] May 25 21:39:46.461: INFO: successfully validated that service endpoint-test2 in namespace services-7725 exposes endpoints map[pod1:[80] pod2:[80]] (4.142659246s elapsed) STEP: Deleting pod pod1 in namespace services-7725 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7725 to expose endpoints map[pod2:[80]] May 25 21:39:47.534: INFO: successfully validated that service endpoint-test2 in namespace services-7725 exposes endpoints map[pod2:[80]] (1.067893164s elapsed) STEP: Deleting pod pod2 in namespace services-7725 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7725 to expose endpoints map[] May 25 21:39:48.598: INFO: successfully validated that service endpoint-test2 in namespace services-7725 exposes endpoints map[] (1.05983785s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:39:48.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7725" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.617 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":90,"skipped":1301,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:39:48.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-kqgg STEP: Creating a pod to test atomic-volume-subpath May 25 21:39:48.819: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-kqgg" in namespace "subpath-3415" to be "success or failure" May 25 21:39:48.823: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.927882ms May 25 21:39:50.828: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00857779s May 25 21:39:52.832: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 4.012910627s May 25 21:39:54.836: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 6.017067248s May 25 21:39:56.840: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 8.020772857s May 25 21:39:58.844: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 10.0249278s May 25 21:40:00.848: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 12.028898302s May 25 21:40:02.852: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 14.032909391s May 25 21:40:04.856: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 16.036858006s May 25 21:40:06.860: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 18.040735416s May 25 21:40:08.864: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 20.044990495s May 25 21:40:10.869: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. Elapsed: 22.049758696s May 25 21:40:12.874: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.054381552s May 25 21:40:14.878: INFO: Pod "pod-subpath-test-downwardapi-kqgg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.058839707s STEP: Saw pod success May 25 21:40:14.878: INFO: Pod "pod-subpath-test-downwardapi-kqgg" satisfied condition "success or failure" May 25 21:40:14.881: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-downwardapi-kqgg container test-container-subpath-downwardapi-kqgg: STEP: delete the pod May 25 21:40:14.930: INFO: Waiting for pod pod-subpath-test-downwardapi-kqgg to disappear May 25 21:40:14.942: INFO: Pod pod-subpath-test-downwardapi-kqgg no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-kqgg May 25 21:40:14.942: INFO: Deleting pod "pod-subpath-test-downwardapi-kqgg" in namespace "subpath-3415" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:40:14.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3415" for this suite. • [SLOW TEST:26.254 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":91,"skipped":1332,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:40:14.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 25 21:40:15.059: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 25 21:40:24.148: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:40:24.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6802" for this suite. 
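The "setting up watch" step in this pods test is a field-selector watch on the pod's name, opened before the pod is submitted so that the creation, modification, and deletion events are all observed in order. A sketch against the client-go of this v1.17 era (later releases add a context argument); the pod name is illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Watch a single pod by name so that ADDED, MODIFIED and finally
	// DELETED events are all delivered on one channel.
	w, err := client.CoreV1().Pods("default").Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=pod-submit-remove-demo",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("event:", ev.Type)
		if ev.Type == "DELETED" {
			return // deletion observed; graceful termination completed
		}
	}
}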
• [SLOW TEST:9.209 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1373,"failed":0} [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:40:24.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:40:24.253: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52c83b76-b933-48da-9482-891703ef2af5" in namespace "downward-api-3737" to be "success or failure" May 25 21:40:24.256: INFO: Pod "downwardapi-volume-52c83b76-b933-48da-9482-891703ef2af5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.015102ms May 25 21:40:26.262: INFO: Pod "downwardapi-volume-52c83b76-b933-48da-9482-891703ef2af5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009394609s May 25 21:40:28.266: INFO: Pod "downwardapi-volume-52c83b76-b933-48da-9482-891703ef2af5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013510201s STEP: Saw pod success May 25 21:40:28.267: INFO: Pod "downwardapi-volume-52c83b76-b933-48da-9482-891703ef2af5" satisfied condition "success or failure" May 25 21:40:28.269: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-52c83b76-b933-48da-9482-891703ef2af5 container client-container: STEP: delete the pod May 25 21:40:28.416: INFO: Waiting for pod downwardapi-volume-52c83b76-b933-48da-9482-891703ef2af5 to disappear May 25 21:40:28.466: INFO: Pod downwardapi-volume-52c83b76-b933-48da-9482-891703ef2af5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:40:28.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3737" for this suite. 
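When resources.limits.cpu is left unset, the downward API file falls back to the node's allocatable CPU, which is the behaviour this test asserts. A sketch of the volume wiring with illustrative names:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downward-cpu-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// No resources.limits.cpu is set, so the projected file carries
				// the node's allocatable CPU instead.
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}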
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1373,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:40:28.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:40:28.719: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3da670be-b592-4271-816d-8319575ade50" in namespace "projected-4264" to be "success or failure" May 25 21:40:28.723: INFO: Pod "downwardapi-volume-3da670be-b592-4271-816d-8319575ade50": Phase="Pending", Reason="", readiness=false. Elapsed: 3.760337ms May 25 21:40:30.726: INFO: Pod "downwardapi-volume-3da670be-b592-4271-816d-8319575ade50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007344275s May 25 21:40:32.731: INFO: Pod "downwardapi-volume-3da670be-b592-4271-816d-8319575ade50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011908423s STEP: Saw pod success May 25 21:40:32.731: INFO: Pod "downwardapi-volume-3da670be-b592-4271-816d-8319575ade50" satisfied condition "success or failure" May 25 21:40:32.734: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3da670be-b592-4271-816d-8319575ade50 container client-container: STEP: delete the pod May 25 21:40:32.772: INFO: Waiting for pod downwardapi-volume-3da670be-b592-4271-816d-8319575ade50 to disappear May 25 21:40:32.812: INFO: Pod downwardapi-volume-3da670be-b592-4271-816d-8319575ade50 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:40:32.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4264" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1404,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:40:32.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 21:40:32.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5530' May 25 21:40:33.093: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 25 21:40:33.093: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 25 21:40:33.106: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 25 21:40:33.118: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 25 21:40:33.141: INFO: scanned /root for discovery docs: May 25 21:40:33.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5530' May 25 21:40:50.041: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 25 21:40:50.041: INFO: stdout: "Created e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354\nScaling up e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" May 25 21:40:50.041: INFO: stdout: "Created e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354\nScaling up e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 25 21:40:50.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5530' May 25 21:40:50.134: INFO: stderr: "" May 25 21:40:50.134: INFO: stdout: "e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354-j65r4 " May 25 21:40:50.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354-j65r4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5530' May 25 21:40:50.235: INFO: stderr: "" May 25 21:40:50.235: INFO: stdout: "true" May 25 21:40:50.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354-j65r4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5530' May 25 21:40:50.336: INFO: stderr: "" May 25 21:40:50.336: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 25 21:40:50.336: INFO: e2e-test-httpd-rc-7d9cd98c345012d4c29c74206784c354-j65r4 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 25 21:40:50.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5530' May 25 21:40:50.439: INFO: stderr: "" May 25 21:40:50.439: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:40:50.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5530" for this suite. 
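The rolling-update flow above is entirely client-side kubectl logic: it clones the RC under a hashed name, shifts pods over one at a time (keeping 1 available, never exceeding 2), deletes the original, and renames the clone back. A sketch of replaying that step from Go, assuming a kubectl old enough to still ship `rolling-update` (the command is already flagged deprecated in the log and was removed from later kubectl releases) and the replication controller created earlier:

```go
// Sketch: replay the suite's rolling-update step by shelling out to kubectl.
// Assumes kubectl <= v1.17 on PATH and ~/.kube/config pointing at the cluster.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// kubectl runs one kubectl invocation and returns its combined output.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	ns := "kubectl-5530" // hypothetical; the suite generates a fresh namespace per test
	// Even with an unchanged image, rolling-update replaces every pod, which
	// is exactly what this conformance test verifies.
	fmt.Print(kubectl("rolling-update", "e2e-test-httpd-rc",
		"--update-period=1s",
		"--image=docker.io/library/httpd:2.4.38-alpine",
		"--image-pull-policy=IfNotPresent",
		"--namespace="+ns))
}
```

On clusters newer than this log's v1.17, the equivalent flow is a Deployment plus `kubectl rollout restart`, as the deprecation warning in the stderr above suggests.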
• [SLOW TEST:17.683 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":95,"skipped":1410,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:40:50.504: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:40:51.831: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 21:40:53.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039651, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039651, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039651, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039651, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:40:56.886: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:40:56.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3401" for this suite. 
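The mutating-webhook test above registers its hook through the admissionregistration.k8s.io/v1 API, then creates a pod and checks that the webhook's defaults were applied. A minimal sketch of such a registration object follows; the names, path, port, and caBundle handling are placeholders rather than the suite's generated values, and note that the v1 API makes sideEffects and admissionReviewVersions mandatory.

```go
// Sketch: a v1 MutatingWebhookConfiguration routing pod CREATEs to an
// in-cluster service. All names and the /mutating-pods path are placeholders.
package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func mutatingPodWebhook(caBundle []byte) *admissionregistrationv1.MutatingWebhookConfiguration {
	path := "/mutating-pods"
	port := int32(8443)
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	return &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-pods-example"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-pods.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-3401", // the suite uses a generated namespace like this
					Name:      "e2e-test-webhook",
					Path:      &path,
					Port:      &port,
				},
				CABundle: caBundle, // CA that signed the webhook server's cert
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}
```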
STEP: Destroying namespace "webhook-3401-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.636 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":96,"skipped":1412,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:40:57.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:40:57.919: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 21:41:00.306: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039657, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039657, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039658, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039657, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 21:41:02.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039657, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039657, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039658, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039657, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:41:05.341: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:41:05.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3173" for this suite. STEP: Destroying namespace "webhook-3173-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.313 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":97,"skipped":1422,"failed":0} SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:41:05.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:41:05.527: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-8606 I0525 21:41:05.537630 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8606, replica count: 1 
I0525 21:41:06.588035 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 21:41:07.588289 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 21:41:08.588504 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 21:41:08.718: INFO: Created: latency-svc-xpfwv May 25 21:41:08.732: INFO: Got endpoints: latency-svc-xpfwv [43.912251ms] May 25 21:41:08.767: INFO: Created: latency-svc-lx2vp May 25 21:41:08.777: INFO: Got endpoints: latency-svc-lx2vp [44.902953ms] May 25 21:41:08.811: INFO: Created: latency-svc-swbgv May 25 21:41:08.819: INFO: Got endpoints: latency-svc-swbgv [86.25974ms] May 25 21:41:08.842: INFO: Created: latency-svc-kll5t May 25 21:41:08.857: INFO: Got endpoints: latency-svc-kll5t [123.937098ms] May 25 21:41:08.909: INFO: Created: latency-svc-qgn6l May 25 21:41:08.924: INFO: Got endpoints: latency-svc-qgn6l [189.860965ms] May 25 21:41:08.957: INFO: Created: latency-svc-9g4bw May 25 21:41:08.970: INFO: Got endpoints: latency-svc-9g4bw [236.575559ms] May 25 21:41:09.051: INFO: Created: latency-svc-4vxpp May 25 21:41:09.056: INFO: Got endpoints: latency-svc-4vxpp [321.849028ms] May 25 21:41:09.091: INFO: Created: latency-svc-qftpv May 25 21:41:09.103: INFO: Got endpoints: latency-svc-qftpv [368.248792ms] May 25 21:41:09.121: INFO: Created: latency-svc-7whmn May 25 21:41:09.133: INFO: Got endpoints: latency-svc-7whmn [397.848533ms] May 25 21:41:09.200: INFO: Created: latency-svc-lzwtw May 25 21:41:09.229: INFO: Created: latency-svc-s6f5g May 25 21:41:09.229: INFO: Got endpoints: latency-svc-lzwtw [493.89354ms] May 25 21:41:09.253: INFO: Got endpoints: latency-svc-s6f5g [517.936895ms] May 25 21:41:09.285: INFO: Created: latency-svc-fmlgc May 25 21:41:09.295: INFO: Got endpoints: latency-svc-fmlgc [559.849102ms] May 25 21:41:09.354: INFO: Created: latency-svc-26sfm May 25 21:41:09.368: INFO: Got endpoints: latency-svc-26sfm [631.475405ms] May 25 21:41:09.395: INFO: Created: latency-svc-4sjjq May 25 21:41:09.404: INFO: Got endpoints: latency-svc-4sjjq [667.998324ms] May 25 21:41:09.426: INFO: Created: latency-svc-8tv25 May 25 21:41:09.487: INFO: Got endpoints: latency-svc-8tv25 [750.78047ms] May 25 21:41:09.489: INFO: Created: latency-svc-hfgls May 25 21:41:09.494: INFO: Got endpoints: latency-svc-hfgls [757.260732ms] May 25 21:41:09.518: INFO: Created: latency-svc-r2v9v May 25 21:41:09.531: INFO: Got endpoints: latency-svc-r2v9v [753.702413ms] May 25 21:41:09.553: INFO: Created: latency-svc-p2vjp May 25 21:41:09.567: INFO: Got endpoints: latency-svc-p2vjp [747.761487ms] May 25 21:41:09.587: INFO: Created: latency-svc-j8tzg May 25 21:41:09.655: INFO: Got endpoints: latency-svc-j8tzg [797.458128ms] May 25 21:41:09.656: INFO: Created: latency-svc-glxsd May 25 21:41:09.664: INFO: Got endpoints: latency-svc-glxsd [740.253556ms] May 25 21:41:09.691: INFO: Created: latency-svc-m26gj May 25 21:41:09.719: INFO: Got endpoints: latency-svc-m26gj [749.143359ms] May 25 21:41:09.745: INFO: Created: latency-svc-c5svs May 25 21:41:09.793: INFO: Got endpoints: latency-svc-c5svs [736.844706ms] May 25 21:41:09.816: INFO: Created: latency-svc-m2t4n May 25 21:41:09.833: INFO: Got endpoints: latency-svc-m2t4n [730.213797ms] May 25 21:41:09.864: INFO: Created: latency-svc-c99bt May 25 21:41:09.937: 
INFO: Got endpoints: latency-svc-c99bt [803.755469ms] May 25 21:41:09.955: INFO: Created: latency-svc-95cq4 May 25 21:41:10.001: INFO: Got endpoints: latency-svc-95cq4 [772.240379ms] May 25 21:41:10.098: INFO: Created: latency-svc-87wpm May 25 21:41:10.103: INFO: Got endpoints: latency-svc-87wpm [849.407973ms] May 25 21:41:10.134: INFO: Created: latency-svc-hggcs May 25 21:41:10.146: INFO: Got endpoints: latency-svc-hggcs [850.310452ms] May 25 21:41:10.183: INFO: Created: latency-svc-dbs4s May 25 21:41:10.230: INFO: Got endpoints: latency-svc-dbs4s [862.458979ms] May 25 21:41:10.236: INFO: Created: latency-svc-98zv2 May 25 21:41:10.265: INFO: Got endpoints: latency-svc-98zv2 [861.285621ms] May 25 21:41:10.290: INFO: Created: latency-svc-rmzmz May 25 21:41:10.302: INFO: Got endpoints: latency-svc-rmzmz [815.040874ms] May 25 21:41:10.319: INFO: Created: latency-svc-cmf48 May 25 21:41:10.386: INFO: Got endpoints: latency-svc-cmf48 [891.5561ms] May 25 21:41:10.389: INFO: Created: latency-svc-4kz4f May 25 21:41:10.399: INFO: Got endpoints: latency-svc-4kz4f [868.118176ms] May 25 21:41:10.423: INFO: Created: latency-svc-6d8dh May 25 21:41:10.434: INFO: Got endpoints: latency-svc-6d8dh [867.088901ms] May 25 21:41:10.459: INFO: Created: latency-svc-8hkh7 May 25 21:41:10.554: INFO: Got endpoints: latency-svc-8hkh7 [898.660762ms] May 25 21:41:10.597: INFO: Created: latency-svc-mx5vs May 25 21:41:10.616: INFO: Got endpoints: latency-svc-mx5vs [952.268794ms] May 25 21:41:10.697: INFO: Created: latency-svc-cj7lh May 25 21:41:10.711: INFO: Got endpoints: latency-svc-cj7lh [991.865407ms] May 25 21:41:10.751: INFO: Created: latency-svc-vnk47 May 25 21:41:10.766: INFO: Got endpoints: latency-svc-vnk47 [972.632868ms] May 25 21:41:10.841: INFO: Created: latency-svc-rvq5k May 25 21:41:10.856: INFO: Got endpoints: latency-svc-rvq5k [1.022610946s] May 25 21:41:10.885: INFO: Created: latency-svc-rz7wr May 25 21:41:10.898: INFO: Got endpoints: latency-svc-rz7wr [961.103097ms] May 25 21:41:11.028: INFO: Created: latency-svc-clndv May 25 21:41:11.041: INFO: Got endpoints: latency-svc-clndv [1.04021481s] May 25 21:41:11.099: INFO: Created: latency-svc-fw5pl May 25 21:41:11.120: INFO: Got endpoints: latency-svc-fw5pl [1.01762675s] May 25 21:41:11.195: INFO: Created: latency-svc-p2zk7 May 25 21:41:11.211: INFO: Got endpoints: latency-svc-p2zk7 [1.064906688s] May 25 21:41:11.233: INFO: Created: latency-svc-d5nzv May 25 21:41:11.246: INFO: Got endpoints: latency-svc-d5nzv [1.015955097s] May 25 21:41:11.273: INFO: Created: latency-svc-sg95p May 25 21:41:11.291: INFO: Got endpoints: latency-svc-sg95p [1.025550121s] May 25 21:41:11.351: INFO: Created: latency-svc-ffdkz May 25 21:41:11.371: INFO: Got endpoints: latency-svc-ffdkz [1.068779317s] May 25 21:41:11.448: INFO: Created: latency-svc-44f4d May 25 21:41:11.483: INFO: Got endpoints: latency-svc-44f4d [1.097141345s] May 25 21:41:11.507: INFO: Created: latency-svc-h9c87 May 25 21:41:11.523: INFO: Got endpoints: latency-svc-h9c87 [1.123911594s] May 25 21:41:11.549: INFO: Created: latency-svc-5wmxs May 25 21:41:11.560: INFO: Got endpoints: latency-svc-5wmxs [1.125269595s] May 25 21:41:11.620: INFO: Created: latency-svc-5ld42 May 25 21:41:11.622: INFO: Got endpoints: latency-svc-5ld42 [1.06828219s] May 25 21:41:11.676: INFO: Created: latency-svc-f8t67 May 25 21:41:11.703: INFO: Got endpoints: latency-svc-f8t67 [1.086234827s] May 25 21:41:11.777: INFO: Created: latency-svc-kdbls May 25 21:41:11.788: INFO: Got endpoints: latency-svc-kdbls [1.076715791s] May 25 21:41:11.819: 
INFO: Created: latency-svc-c5vh8 May 25 21:41:11.831: INFO: Got endpoints: latency-svc-c5vh8 [1.065037317s] May 25 21:41:11.856: INFO: Created: latency-svc-6jr9h May 25 21:41:11.900: INFO: Got endpoints: latency-svc-6jr9h [1.044479606s] May 25 21:41:11.928: INFO: Created: latency-svc-xnbtj May 25 21:41:11.945: INFO: Got endpoints: latency-svc-xnbtj [1.047057703s] May 25 21:41:11.976: INFO: Created: latency-svc-5gmlc May 25 21:41:12.044: INFO: Got endpoints: latency-svc-5gmlc [1.002700294s] May 25 21:41:12.088: INFO: Created: latency-svc-2g7ws May 25 21:41:12.107: INFO: Got endpoints: latency-svc-2g7ws [986.366178ms] May 25 21:41:12.132: INFO: Created: latency-svc-hppnb May 25 21:41:12.170: INFO: Got endpoints: latency-svc-hppnb [958.880922ms] May 25 21:41:12.239: INFO: Created: latency-svc-5rnfz May 25 21:41:12.257: INFO: Got endpoints: latency-svc-5rnfz [1.01102586s] May 25 21:41:12.302: INFO: Created: latency-svc-xqf2c May 25 21:41:12.305: INFO: Got endpoints: latency-svc-xqf2c [1.014170846s] May 25 21:41:12.334: INFO: Created: latency-svc-5z5kq May 25 21:41:12.348: INFO: Got endpoints: latency-svc-5z5kq [976.814125ms] May 25 21:41:12.373: INFO: Created: latency-svc-g6hfg May 25 21:41:12.384: INFO: Got endpoints: latency-svc-g6hfg [901.256355ms] May 25 21:41:12.460: INFO: Created: latency-svc-pb65p May 25 21:41:12.468: INFO: Got endpoints: latency-svc-pb65p [944.944464ms] May 25 21:41:12.490: INFO: Created: latency-svc-5c4t4 May 25 21:41:12.517: INFO: Got endpoints: latency-svc-5c4t4 [957.239878ms] May 25 21:41:12.550: INFO: Created: latency-svc-dsj6r May 25 21:41:12.589: INFO: Got endpoints: latency-svc-dsj6r [967.240435ms] May 25 21:41:12.630: INFO: Created: latency-svc-5c2f6 May 25 21:41:12.643: INFO: Got endpoints: latency-svc-5c2f6 [940.671163ms] May 25 21:41:12.667: INFO: Created: latency-svc-v4887 May 25 21:41:12.681: INFO: Got endpoints: latency-svc-v4887 [892.484165ms] May 25 21:41:12.727: INFO: Created: latency-svc-b78f8 May 25 21:41:12.746: INFO: Got endpoints: latency-svc-b78f8 [915.191813ms] May 25 21:41:12.772: INFO: Created: latency-svc-nfq8q May 25 21:41:12.788: INFO: Got endpoints: latency-svc-nfq8q [887.955855ms] May 25 21:41:12.827: INFO: Created: latency-svc-f4lvh May 25 21:41:12.910: INFO: Got endpoints: latency-svc-f4lvh [964.832537ms] May 25 21:41:12.948: INFO: Created: latency-svc-d94pn May 25 21:41:12.968: INFO: Got endpoints: latency-svc-d94pn [924.156698ms] May 25 21:41:13.000: INFO: Created: latency-svc-fdtpv May 25 21:41:13.056: INFO: Got endpoints: latency-svc-fdtpv [949.188417ms] May 25 21:41:13.108: INFO: Created: latency-svc-nktk7 May 25 21:41:13.125: INFO: Got endpoints: latency-svc-nktk7 [955.584578ms] May 25 21:41:13.153: INFO: Created: latency-svc-9fmjt May 25 21:41:13.194: INFO: Got endpoints: latency-svc-9fmjt [936.605527ms] May 25 21:41:13.224: INFO: Created: latency-svc-gxg7g May 25 21:41:13.239: INFO: Got endpoints: latency-svc-gxg7g [933.657437ms] May 25 21:41:13.270: INFO: Created: latency-svc-2br8h May 25 21:41:13.287: INFO: Got endpoints: latency-svc-2br8h [939.030536ms] May 25 21:41:13.326: INFO: Created: latency-svc-th29w May 25 21:41:13.330: INFO: Got endpoints: latency-svc-th29w [945.318658ms] May 25 21:41:13.354: INFO: Created: latency-svc-fd7rt May 25 21:41:13.366: INFO: Got endpoints: latency-svc-fd7rt [897.474016ms] May 25 21:41:13.386: INFO: Created: latency-svc-6cffm May 25 21:41:13.408: INFO: Got endpoints: latency-svc-6cffm [891.301451ms] May 25 21:41:13.463: INFO: Created: latency-svc-jgzf5 May 25 21:41:13.498: INFO: Got 
endpoints: latency-svc-jgzf5 [908.585386ms] May 25 21:41:13.498: INFO: Created: latency-svc-4pd5b May 25 21:41:13.516: INFO: Got endpoints: latency-svc-4pd5b [872.971984ms] May 25 21:41:13.534: INFO: Created: latency-svc-9snlp May 25 21:41:13.558: INFO: Got endpoints: latency-svc-9snlp [877.536295ms] May 25 21:41:13.613: INFO: Created: latency-svc-sjfgv May 25 21:41:13.638: INFO: Created: latency-svc-fdx6x May 25 21:41:13.638: INFO: Got endpoints: latency-svc-sjfgv [891.778471ms] May 25 21:41:13.654: INFO: Got endpoints: latency-svc-fdx6x [866.143526ms] May 25 21:41:13.686: INFO: Created: latency-svc-kj9tl May 25 21:41:13.703: INFO: Got endpoints: latency-svc-kj9tl [793.108653ms] May 25 21:41:13.757: INFO: Created: latency-svc-p8cdh May 25 21:41:13.760: INFO: Got endpoints: latency-svc-p8cdh [791.410638ms] May 25 21:41:13.830: INFO: Created: latency-svc-6cq2g May 25 21:41:13.847: INFO: Got endpoints: latency-svc-6cq2g [790.889673ms] May 25 21:41:13.894: INFO: Created: latency-svc-qnswm May 25 21:41:13.897: INFO: Got endpoints: latency-svc-qnswm [771.906825ms] May 25 21:41:13.930: INFO: Created: latency-svc-h5mg9 May 25 21:41:13.945: INFO: Got endpoints: latency-svc-h5mg9 [751.054363ms] May 25 21:41:14.039: INFO: Created: latency-svc-5dksb May 25 21:41:14.053: INFO: Got endpoints: latency-svc-5dksb [814.179274ms] May 25 21:41:14.074: INFO: Created: latency-svc-qs9n2 May 25 21:41:14.089: INFO: Got endpoints: latency-svc-qs9n2 [802.176423ms] May 25 21:41:14.135: INFO: Created: latency-svc-zxvb7 May 25 21:41:14.218: INFO: Got endpoints: latency-svc-zxvb7 [888.158411ms] May 25 21:41:14.220: INFO: Created: latency-svc-6n8kw May 25 21:41:14.233: INFO: Got endpoints: latency-svc-6n8kw [867.545581ms] May 25 21:41:14.302: INFO: Created: latency-svc-xmlcc May 25 21:41:14.318: INFO: Got endpoints: latency-svc-xmlcc [909.211242ms] May 25 21:41:14.375: INFO: Created: latency-svc-5ffdx May 25 21:41:14.384: INFO: Got endpoints: latency-svc-5ffdx [885.789951ms] May 25 21:41:14.407: INFO: Created: latency-svc-lh9q4 May 25 21:41:14.414: INFO: Got endpoints: latency-svc-lh9q4 [897.251664ms] May 25 21:41:14.435: INFO: Created: latency-svc-28tdb May 25 21:41:14.450: INFO: Got endpoints: latency-svc-28tdb [891.926977ms] May 25 21:41:14.547: INFO: Created: latency-svc-pd9m2 May 25 21:41:14.550: INFO: Got endpoints: latency-svc-pd9m2 [912.422243ms] May 25 21:41:14.579: INFO: Created: latency-svc-nzhvk May 25 21:41:14.610: INFO: Got endpoints: latency-svc-nzhvk [955.424439ms] May 25 21:41:14.640: INFO: Created: latency-svc-9cj4q May 25 21:41:14.679: INFO: Got endpoints: latency-svc-9cj4q [975.585267ms] May 25 21:41:14.691: INFO: Created: latency-svc-hhrwq May 25 21:41:14.710: INFO: Got endpoints: latency-svc-hhrwq [949.730146ms] May 25 21:41:14.728: INFO: Created: latency-svc-bpnsd May 25 21:41:14.745: INFO: Got endpoints: latency-svc-bpnsd [898.454887ms] May 25 21:41:14.764: INFO: Created: latency-svc-w7lnr May 25 21:41:14.775: INFO: Got endpoints: latency-svc-w7lnr [878.166776ms] May 25 21:41:14.823: INFO: Created: latency-svc-pfhvn May 25 21:41:14.830: INFO: Got endpoints: latency-svc-pfhvn [884.952552ms] May 25 21:41:14.861: INFO: Created: latency-svc-6tvsk May 25 21:41:14.872: INFO: Got endpoints: latency-svc-6tvsk [819.368364ms] May 25 21:41:14.903: INFO: Created: latency-svc-djxx7 May 25 21:41:14.914: INFO: Got endpoints: latency-svc-djxx7 [825.117285ms] May 25 21:41:14.990: INFO: Created: latency-svc-phxjt May 25 21:41:15.010: INFO: Got endpoints: latency-svc-phxjt [792.072367ms] May 25 21:41:15.034: INFO: 
Created: latency-svc-bbdcq May 25 21:41:15.164: INFO: Got endpoints: latency-svc-bbdcq [930.177744ms] May 25 21:41:15.171: INFO: Created: latency-svc-5gxf2 May 25 21:41:15.179: INFO: Got endpoints: latency-svc-5gxf2 [861.278135ms] May 25 21:41:15.207: INFO: Created: latency-svc-fm2jt May 25 21:41:15.237: INFO: Got endpoints: latency-svc-fm2jt [853.625556ms] May 25 21:41:15.331: INFO: Created: latency-svc-h75ng May 25 21:41:15.336: INFO: Got endpoints: latency-svc-h75ng [921.87586ms] May 25 21:41:15.359: INFO: Created: latency-svc-4w5kt May 25 21:41:15.393: INFO: Got endpoints: latency-svc-4w5kt [943.165938ms] May 25 21:41:15.430: INFO: Created: latency-svc-8b8fx May 25 21:41:15.494: INFO: Got endpoints: latency-svc-8b8fx [943.312075ms] May 25 21:41:15.495: INFO: Created: latency-svc-wtcsl May 25 21:41:15.502: INFO: Got endpoints: latency-svc-wtcsl [892.553134ms] May 25 21:41:15.521: INFO: Created: latency-svc-t5bph May 25 21:41:15.539: INFO: Got endpoints: latency-svc-t5bph [860.617458ms] May 25 21:41:15.557: INFO: Created: latency-svc-t8576 May 25 21:41:15.576: INFO: Got endpoints: latency-svc-t8576 [866.632054ms] May 25 21:41:15.643: INFO: Created: latency-svc-f4s4b May 25 21:41:15.646: INFO: Got endpoints: latency-svc-f4s4b [900.531418ms] May 25 21:41:15.688: INFO: Created: latency-svc-jgvcl May 25 21:41:15.714: INFO: Got endpoints: latency-svc-jgvcl [938.542803ms] May 25 21:41:15.781: INFO: Created: latency-svc-fwhxm May 25 21:41:15.783: INFO: Got endpoints: latency-svc-fwhxm [953.483376ms] May 25 21:41:15.827: INFO: Created: latency-svc-wj4cl May 25 21:41:15.840: INFO: Got endpoints: latency-svc-wj4cl [967.699013ms] May 25 21:41:15.869: INFO: Created: latency-svc-gt92q May 25 21:41:15.931: INFO: Got endpoints: latency-svc-gt92q [1.0164596s] May 25 21:41:15.934: INFO: Created: latency-svc-6p4lw May 25 21:41:15.942: INFO: Got endpoints: latency-svc-6p4lw [932.336037ms] May 25 21:41:15.963: INFO: Created: latency-svc-bwr9k May 25 21:41:15.972: INFO: Got endpoints: latency-svc-bwr9k [808.72673ms] May 25 21:41:16.001: INFO: Created: latency-svc-wk4m2 May 25 21:41:16.086: INFO: Got endpoints: latency-svc-wk4m2 [907.12347ms] May 25 21:41:16.107: INFO: Created: latency-svc-vrj9x May 25 21:41:16.123: INFO: Got endpoints: latency-svc-vrj9x [885.341291ms] May 25 21:41:16.149: INFO: Created: latency-svc-4pdmr May 25 21:41:16.165: INFO: Got endpoints: latency-svc-4pdmr [829.38559ms] May 25 21:41:16.225: INFO: Created: latency-svc-vs4ff May 25 21:41:16.227: INFO: Got endpoints: latency-svc-vs4ff [833.653861ms] May 25 21:41:16.253: INFO: Created: latency-svc-ww6fs May 25 21:41:16.268: INFO: Got endpoints: latency-svc-ww6fs [773.948312ms] May 25 21:41:16.301: INFO: Created: latency-svc-qkxsv May 25 21:41:16.316: INFO: Got endpoints: latency-svc-qkxsv [813.479561ms] May 25 21:41:16.362: INFO: Created: latency-svc-lr9g9 May 25 21:41:16.365: INFO: Got endpoints: latency-svc-lr9g9 [825.584267ms] May 25 21:41:16.413: INFO: Created: latency-svc-bg8bg May 25 21:41:16.511: INFO: Got endpoints: latency-svc-bg8bg [934.945394ms] May 25 21:41:16.523: INFO: Created: latency-svc-cxb9m May 25 21:41:16.539: INFO: Got endpoints: latency-svc-cxb9m [892.54343ms] May 25 21:41:16.563: INFO: Created: latency-svc-jlqnj May 25 21:41:16.594: INFO: Got endpoints: latency-svc-jlqnj [880.122004ms] May 25 21:41:16.662: INFO: Created: latency-svc-kbxkb May 25 21:41:16.685: INFO: Got endpoints: latency-svc-kbxkb [901.809154ms] May 25 21:41:16.685: INFO: Created: latency-svc-855tf May 25 21:41:16.701: INFO: Got endpoints: 
latency-svc-855tf [861.279746ms] May 25 21:41:16.721: INFO: Created: latency-svc-6s5wt May 25 21:41:16.738: INFO: Got endpoints: latency-svc-6s5wt [806.938205ms] May 25 21:41:16.758: INFO: Created: latency-svc-pr654 May 25 21:41:16.793: INFO: Got endpoints: latency-svc-pr654 [850.631479ms] May 25 21:41:16.803: INFO: Created: latency-svc-tlczb May 25 21:41:16.816: INFO: Got endpoints: latency-svc-tlczb [843.134605ms] May 25 21:41:16.839: INFO: Created: latency-svc-67gwj May 25 21:41:16.871: INFO: Got endpoints: latency-svc-67gwj [784.620209ms] May 25 21:41:16.943: INFO: Created: latency-svc-mrlqd May 25 21:41:16.945: INFO: Got endpoints: latency-svc-mrlqd [822.717137ms] May 25 21:41:17.013: INFO: Created: latency-svc-7cz9f May 25 21:41:17.033: INFO: Got endpoints: latency-svc-7cz9f [868.229628ms] May 25 21:41:17.075: INFO: Created: latency-svc-b82df May 25 21:41:17.078: INFO: Got endpoints: latency-svc-b82df [850.986287ms] May 25 21:41:17.135: INFO: Created: latency-svc-z28nz May 25 21:41:17.147: INFO: Got endpoints: latency-svc-z28nz [878.986234ms] May 25 21:41:17.171: INFO: Created: latency-svc-cbsdr May 25 21:41:17.218: INFO: Got endpoints: latency-svc-cbsdr [901.650279ms] May 25 21:41:17.222: INFO: Created: latency-svc-fj4mv May 25 21:41:17.231: INFO: Got endpoints: latency-svc-fj4mv [866.502085ms] May 25 21:41:17.253: INFO: Created: latency-svc-86fxl May 25 21:41:17.268: INFO: Got endpoints: latency-svc-86fxl [756.136919ms] May 25 21:41:17.297: INFO: Created: latency-svc-gpjt4 May 25 21:41:17.310: INFO: Got endpoints: latency-svc-gpjt4 [771.17546ms] May 25 21:41:17.368: INFO: Created: latency-svc-n7jpb May 25 21:41:17.378: INFO: Got endpoints: latency-svc-n7jpb [783.978479ms] May 25 21:41:17.415: INFO: Created: latency-svc-7wtwq May 25 21:41:17.439: INFO: Got endpoints: latency-svc-7wtwq [753.64129ms] May 25 21:41:17.517: INFO: Created: latency-svc-c6sks May 25 21:41:17.525: INFO: Got endpoints: latency-svc-c6sks [823.128022ms] May 25 21:41:17.555: INFO: Created: latency-svc-56wpw May 25 21:41:17.565: INFO: Got endpoints: latency-svc-56wpw [827.219554ms] May 25 21:41:17.595: INFO: Created: latency-svc-w87zf May 25 21:41:17.607: INFO: Got endpoints: latency-svc-w87zf [814.380604ms] May 25 21:41:17.661: INFO: Created: latency-svc-jvl4d May 25 21:41:17.668: INFO: Got endpoints: latency-svc-jvl4d [851.972337ms] May 25 21:41:17.704: INFO: Created: latency-svc-7g8d7 May 25 21:41:17.716: INFO: Got endpoints: latency-svc-7g8d7 [845.738595ms] May 25 21:41:17.759: INFO: Created: latency-svc-dl5rf May 25 21:41:17.823: INFO: Got endpoints: latency-svc-dl5rf [877.495856ms] May 25 21:41:17.831: INFO: Created: latency-svc-tcfnv May 25 21:41:17.847: INFO: Got endpoints: latency-svc-tcfnv [813.206735ms] May 25 21:41:17.877: INFO: Created: latency-svc-p8krh May 25 21:41:17.891: INFO: Got endpoints: latency-svc-p8krh [812.603992ms] May 25 21:41:17.913: INFO: Created: latency-svc-5jlqg May 25 21:41:17.966: INFO: Got endpoints: latency-svc-5jlqg [819.702357ms] May 25 21:41:17.993: INFO: Created: latency-svc-dbjpq May 25 21:41:18.012: INFO: Got endpoints: latency-svc-dbjpq [794.091418ms] May 25 21:41:18.057: INFO: Created: latency-svc-62279 May 25 21:41:18.164: INFO: Got endpoints: latency-svc-62279 [932.866929ms] May 25 21:41:18.167: INFO: Created: latency-svc-5hmz9 May 25 21:41:18.197: INFO: Got endpoints: latency-svc-5hmz9 [929.010591ms] May 25 21:41:18.251: INFO: Created: latency-svc-k5nd8 May 25 21:41:18.313: INFO: Got endpoints: latency-svc-k5nd8 [1.00335468s] May 25 21:41:18.315: INFO: Created: 
latency-svc-lnk2f May 25 21:41:18.324: INFO: Got endpoints: latency-svc-lnk2f [945.502528ms] May 25 21:41:18.346: INFO: Created: latency-svc-cjtgf May 25 21:41:18.361: INFO: Got endpoints: latency-svc-cjtgf [921.731631ms] May 25 21:41:18.393: INFO: Created: latency-svc-29mr9 May 25 21:41:18.475: INFO: Got endpoints: latency-svc-29mr9 [950.64214ms] May 25 21:41:18.509: INFO: Created: latency-svc-gpsd9 May 25 21:41:18.522: INFO: Got endpoints: latency-svc-gpsd9 [957.250603ms] May 25 21:41:18.544: INFO: Created: latency-svc-8l88n May 25 21:41:18.559: INFO: Got endpoints: latency-svc-8l88n [951.256211ms] May 25 21:41:18.626: INFO: Created: latency-svc-k97gd May 25 21:41:18.629: INFO: Got endpoints: latency-svc-k97gd [961.102558ms] May 25 21:41:18.665: INFO: Created: latency-svc-74sg4 May 25 21:41:18.679: INFO: Got endpoints: latency-svc-74sg4 [962.677215ms] May 25 21:41:18.701: INFO: Created: latency-svc-4rv2l May 25 21:41:18.719: INFO: Got endpoints: latency-svc-4rv2l [895.558307ms] May 25 21:41:18.778: INFO: Created: latency-svc-sffsf May 25 21:41:18.782: INFO: Got endpoints: latency-svc-sffsf [934.880647ms] May 25 21:41:18.807: INFO: Created: latency-svc-qnj96 May 25 21:41:18.824: INFO: Got endpoints: latency-svc-qnj96 [933.511995ms] May 25 21:41:18.844: INFO: Created: latency-svc-jfs59 May 25 21:41:18.861: INFO: Got endpoints: latency-svc-jfs59 [894.331133ms] May 25 21:41:18.932: INFO: Created: latency-svc-fxgjt May 25 21:41:18.935: INFO: Got endpoints: latency-svc-fxgjt [923.15293ms] May 25 21:41:19.001: INFO: Created: latency-svc-75jzf May 25 21:41:19.068: INFO: Got endpoints: latency-svc-75jzf [903.540608ms] May 25 21:41:19.089: INFO: Created: latency-svc-bd97k May 25 21:41:19.113: INFO: Got endpoints: latency-svc-bd97k [916.6538ms] May 25 21:41:19.157: INFO: Created: latency-svc-j5jf8 May 25 21:41:19.218: INFO: Got endpoints: latency-svc-j5jf8 [904.935894ms] May 25 21:41:19.251: INFO: Created: latency-svc-q9nzv May 25 21:41:19.276: INFO: Got endpoints: latency-svc-q9nzv [951.815301ms] May 25 21:41:19.292: INFO: Created: latency-svc-6jwcr May 25 21:41:19.361: INFO: Got endpoints: latency-svc-6jwcr [1.000544481s] May 25 21:41:19.363: INFO: Created: latency-svc-9s7k2 May 25 21:41:19.371: INFO: Got endpoints: latency-svc-9s7k2 [896.196242ms] May 25 21:41:19.397: INFO: Created: latency-svc-pbtzx May 25 21:41:19.424: INFO: Got endpoints: latency-svc-pbtzx [901.701629ms] May 25 21:41:19.448: INFO: Created: latency-svc-lbhc9 May 25 21:41:19.494: INFO: Got endpoints: latency-svc-lbhc9 [934.832013ms] May 25 21:41:19.509: INFO: Created: latency-svc-p4sjk May 25 21:41:19.522: INFO: Got endpoints: latency-svc-p4sjk [893.382655ms] May 25 21:41:19.540: INFO: Created: latency-svc-69lwt May 25 21:41:19.553: INFO: Got endpoints: latency-svc-69lwt [873.882105ms] May 25 21:41:19.577: INFO: Created: latency-svc-4spwz May 25 21:41:19.589: INFO: Got endpoints: latency-svc-4spwz [870.747019ms] May 25 21:41:19.638: INFO: Created: latency-svc-chxv7 May 25 21:41:19.659: INFO: Got endpoints: latency-svc-chxv7 [877.189458ms] May 25 21:41:19.708: INFO: Created: latency-svc-xmb97 May 25 21:41:19.734: INFO: Got endpoints: latency-svc-xmb97 [910.076248ms] May 25 21:41:19.799: INFO: Created: latency-svc-kmm4k May 25 21:41:19.806: INFO: Got endpoints: latency-svc-kmm4k [944.951843ms] May 25 21:41:19.828: INFO: Created: latency-svc-dx7q7 May 25 21:41:19.842: INFO: Got endpoints: latency-svc-dx7q7 [907.19548ms] May 25 21:41:19.865: INFO: Created: latency-svc-c2dk9 May 25 21:41:19.878: INFO: Got endpoints: 
latency-svc-c2dk9 [810.222209ms] May 25 21:41:19.942: INFO: Created: latency-svc-phwhn May 25 21:41:19.946: INFO: Got endpoints: latency-svc-phwhn [832.321807ms] May 25 21:41:19.995: INFO: Created: latency-svc-22cl8 May 25 21:41:20.038: INFO: Got endpoints: latency-svc-22cl8 [819.770493ms] May 25 21:41:20.099: INFO: Created: latency-svc-jqwwq May 25 21:41:20.110: INFO: Got endpoints: latency-svc-jqwwq [834.174862ms] May 25 21:41:20.133: INFO: Created: latency-svc-4xgr7 May 25 21:41:20.150: INFO: Got endpoints: latency-svc-4xgr7 [788.058116ms] May 25 21:41:20.175: INFO: Created: latency-svc-hcm62 May 25 21:41:20.192: INFO: Got endpoints: latency-svc-hcm62 [820.198735ms] May 25 21:41:20.260: INFO: Created: latency-svc-f5x6f May 25 21:41:20.269: INFO: Got endpoints: latency-svc-f5x6f [845.244238ms] May 25 21:41:20.290: INFO: Created: latency-svc-xxnnh May 25 21:41:20.300: INFO: Got endpoints: latency-svc-xxnnh [806.198531ms] May 25 21:41:20.320: INFO: Created: latency-svc-7p929 May 25 21:41:20.331: INFO: Got endpoints: latency-svc-7p929 [808.297414ms] May 25 21:41:20.355: INFO: Created: latency-svc-qphz7 May 25 21:41:20.434: INFO: Got endpoints: latency-svc-qphz7 [880.793533ms] May 25 21:41:20.436: INFO: Created: latency-svc-g5q8w May 25 21:41:20.451: INFO: Got endpoints: latency-svc-g5q8w [861.025926ms] May 25 21:41:20.524: INFO: Created: latency-svc-65t8j May 25 21:41:20.595: INFO: Got endpoints: latency-svc-65t8j [936.392396ms] May 25 21:41:20.619: INFO: Created: latency-svc-s9r55 May 25 21:41:20.631: INFO: Got endpoints: latency-svc-s9r55 [896.673537ms] May 25 21:41:20.631: INFO: Latencies: [44.902953ms 86.25974ms 123.937098ms 189.860965ms 236.575559ms 321.849028ms 368.248792ms 397.848533ms 493.89354ms 517.936895ms 559.849102ms 631.475405ms 667.998324ms 730.213797ms 736.844706ms 740.253556ms 747.761487ms 749.143359ms 750.78047ms 751.054363ms 753.64129ms 753.702413ms 756.136919ms 757.260732ms 771.17546ms 771.906825ms 772.240379ms 773.948312ms 783.978479ms 784.620209ms 788.058116ms 790.889673ms 791.410638ms 792.072367ms 793.108653ms 794.091418ms 797.458128ms 802.176423ms 803.755469ms 806.198531ms 806.938205ms 808.297414ms 808.72673ms 810.222209ms 812.603992ms 813.206735ms 813.479561ms 814.179274ms 814.380604ms 815.040874ms 819.368364ms 819.702357ms 819.770493ms 820.198735ms 822.717137ms 823.128022ms 825.117285ms 825.584267ms 827.219554ms 829.38559ms 832.321807ms 833.653861ms 834.174862ms 843.134605ms 845.244238ms 845.738595ms 849.407973ms 850.310452ms 850.631479ms 850.986287ms 851.972337ms 853.625556ms 860.617458ms 861.025926ms 861.278135ms 861.279746ms 861.285621ms 862.458979ms 866.143526ms 866.502085ms 866.632054ms 867.088901ms 867.545581ms 868.118176ms 868.229628ms 870.747019ms 872.971984ms 873.882105ms 877.189458ms 877.495856ms 877.536295ms 878.166776ms 878.986234ms 880.122004ms 880.793533ms 884.952552ms 885.341291ms 885.789951ms 887.955855ms 888.158411ms 891.301451ms 891.5561ms 891.778471ms 891.926977ms 892.484165ms 892.54343ms 892.553134ms 893.382655ms 894.331133ms 895.558307ms 896.196242ms 896.673537ms 897.251664ms 897.474016ms 898.454887ms 898.660762ms 900.531418ms 901.256355ms 901.650279ms 901.701629ms 901.809154ms 903.540608ms 904.935894ms 907.12347ms 907.19548ms 908.585386ms 909.211242ms 910.076248ms 912.422243ms 915.191813ms 916.6538ms 921.731631ms 921.87586ms 923.15293ms 924.156698ms 929.010591ms 930.177744ms 932.336037ms 932.866929ms 933.511995ms 933.657437ms 934.832013ms 934.880647ms 934.945394ms 936.392396ms 936.605527ms 938.542803ms 939.030536ms 940.671163ms 943.165938ms 
943.312075ms 944.944464ms 944.951843ms 945.318658ms 945.502528ms 949.188417ms 949.730146ms 950.64214ms 951.256211ms 951.815301ms 952.268794ms 953.483376ms 955.424439ms 955.584578ms 957.239878ms 957.250603ms 958.880922ms 961.102558ms 961.103097ms 962.677215ms 964.832537ms 967.240435ms 967.699013ms 972.632868ms 975.585267ms 976.814125ms 986.366178ms 991.865407ms 1.000544481s 1.002700294s 1.00335468s 1.01102586s 1.014170846s 1.015955097s 1.0164596s 1.01762675s 1.022610946s 1.025550121s 1.04021481s 1.044479606s 1.047057703s 1.064906688s 1.065037317s 1.06828219s 1.068779317s 1.076715791s 1.086234827s 1.097141345s 1.123911594s 1.125269595s] May 25 21:41:20.631: INFO: 50 %ile: 891.301451ms May 25 21:41:20.631: INFO: 90 %ile: 1.00335468s May 25 21:41:20.631: INFO: 99 %ile: 1.123911594s May 25 21:41:20.631: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:41:20.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-8606" for this suite. • [SLOW TEST:15.197 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":98,"skipped":1426,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:41:20.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:41:20.735: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8823ae69-e192-41fe-86e5-5ecebcc245de" in namespace "projected-4365" to be "success or failure" May 25 21:41:20.757: INFO: Pod "downwardapi-volume-8823ae69-e192-41fe-86e5-5ecebcc245de": Phase="Pending", Reason="", readiness=false. Elapsed: 21.891816ms May 25 21:41:22.761: INFO: Pod "downwardapi-volume-8823ae69-e192-41fe-86e5-5ecebcc245de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026363653s May 25 21:41:24.766: INFO: Pod "downwardapi-volume-8823ae69-e192-41fe-86e5-5ecebcc245de": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031149018s STEP: Saw pod success May 25 21:41:24.766: INFO: Pod "downwardapi-volume-8823ae69-e192-41fe-86e5-5ecebcc245de" satisfied condition "success or failure" May 25 21:41:24.770: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8823ae69-e192-41fe-86e5-5ecebcc245de container client-container: STEP: delete the pod May 25 21:41:24.839: INFO: Waiting for pod downwardapi-volume-8823ae69-e192-41fe-86e5-5ecebcc245de to disappear May 25 21:41:24.858: INFO: Pod downwardapi-volume-8823ae69-e192-41fe-86e5-5ecebcc245de no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:41:24.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4365" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:41:24.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4825, will wait for the garbage collector to delete the pods May 25 21:41:31.006: INFO: Deleting Job.batch foo took: 7.236904ms May 25 21:41:31.406: INFO: Terminating Job.batch foo pods took: 400.220556ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:42:09.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4825" for this suite. 
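The Job deletion above is a cascading delete: the Job object itself disappears in milliseconds ("Deleting Job.batch foo took: 7.236904ms") while the garbage collector reaps the pods afterwards, which is why the suite keeps polling ("Ensuring job was deleted") for roughly another 40 seconds. A minimal sketch of issuing such a delete, assuming client-go of the same pre-1.18 vintage as this log and a background propagation policy; the framework helper's exact policy choice may differ:

```go
// Sketch: delete a Job and let the garbage collector remove its pods.
// DeletePropagationBackground returns once the Job is gone and reaps pods
// asynchronously; Foreground would instead block the Job's deletion on the
// pods. The policy here is an assumption, not the framework's documented one.
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteJobCascading(cs kubernetes.Interface, namespace, name string) error {
	policy := metav1.DeletePropagationBackground
	// client-go <= v0.17 signature; releases from 1.18 on take a
	// context.Context and a metav1.DeleteOptions value instead.
	return cs.BatchV1().Jobs(namespace).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```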
• [SLOW TEST:44.650 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":100,"skipped":1476,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:42:09.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:42:09.626: INFO: Creating deployment "test-recreate-deployment" May 25 21:42:09.640: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 25 21:42:09.652: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 25 21:42:11.680: INFO: Waiting deployment "test-recreate-deployment" to complete May 25 21:42:11.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039729, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039729, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039729, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726039729, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 21:42:13.686: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 25 21:42:13.694: INFO: Updating deployment test-recreate-deployment May 25 21:42:13.694: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 25 21:42:14.249: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3751 /apis/apps/v1/namespaces/deployment-3751/deployments/test-recreate-deployment 085f8b83-5597-4cff-b69d-c337d2a77b13 19121035 2 2020-05-25 21:42:09 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002cf6798 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-25 21:42:13 +0000 UTC,LastTransitionTime:2020-05-25 21:42:13 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-25 21:42:13 +0000 UTC,LastTransitionTime:2020-05-25 21:42:09 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 25 21:42:14.253: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3751 /apis/apps/v1/namespaces/deployment-3751/replicasets/test-recreate-deployment-5f94c574ff 4a6827ef-25cb-4fe4-8a06-acb260de19b5 19121034 1 2020-05-25 21:42:13 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 085f8b83-5597-4cff-b69d-c337d2a77b13 0xc0009f30c7 0xc0009f30c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0009f3128 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 21:42:14.253: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 25 21:42:14.253: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 
deployment-3751 /apis/apps/v1/namespaces/deployment-3751/replicasets/test-recreate-deployment-799c574856 de156e65-8c6d-49d7-86f1-308857d0d86a 19121026 2 2020-05-25 21:42:09 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 085f8b83-5597-4cff-b69d-c337d2a77b13 0xc0009f31d7 0xc0009f31d8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0009f32e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 21:42:14.381: INFO: Pod "test-recreate-deployment-5f94c574ff-tgrst" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-tgrst test-recreate-deployment-5f94c574ff- deployment-3751 /api/v1/namespaces/deployment-3751/pods/test-recreate-deployment-5f94c574ff-tgrst 351f3b9b-60a3-465b-9658-e01281183e55 19121039 0 2020-05-25 21:42:13 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 4a6827ef-25cb-4fe4-8a06-acb260de19b5 0xc0009f3ac7 0xc0009f3ac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8zkxd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8zkxd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8zkxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:42:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:42:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:42:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:42:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-25 21:42:14 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:42:14.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3751" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":101,"skipped":1494,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:42:14.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:42:25.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8980" for this suite. • [SLOW TEST:11.248 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":278,"completed":102,"skipped":1520,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:42:25.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 25 21:42:25.706: INFO: Waiting up to 5m0s for pod "pod-656f6f46-f94b-4026-83b4-8646dea11ab3" in namespace "emptydir-9153" to be "success or failure" May 25 21:42:25.710: INFO: Pod "pod-656f6f46-f94b-4026-83b4-8646dea11ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169272ms May 25 21:42:27.715: INFO: Pod "pod-656f6f46-f94b-4026-83b4-8646dea11ab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008206571s May 25 21:42:29.719: INFO: Pod "pod-656f6f46-f94b-4026-83b4-8646dea11ab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012694342s STEP: Saw pod success May 25 21:42:29.719: INFO: Pod "pod-656f6f46-f94b-4026-83b4-8646dea11ab3" satisfied condition "success or failure" May 25 21:42:29.721: INFO: Trying to get logs from node jerma-worker pod pod-656f6f46-f94b-4026-83b4-8646dea11ab3 container test-container: STEP: delete the pod May 25 21:42:29.766: INFO: Waiting for pod pod-656f6f46-f94b-4026-83b4-8646dea11ab3 to disappear May 25 21:42:29.776: INFO: Pod pod-656f6f46-f94b-4026-83b4-8646dea11ab3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:42:29.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9153" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1541,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:42:29.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 25 21:42:29.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7686' May 25 21:42:30.194: INFO: stderr: "" May 25 21:42:30.194: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 21:42:30.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7686' May 25 21:42:30.326: INFO: stderr: "" May 25 21:42:30.326: INFO: stdout: "update-demo-nautilus-k7j2x update-demo-nautilus-khh4j " May 25 21:42:30.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7j2x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:42:30.448: INFO: stderr: "" May 25 21:42:30.448: INFO: stdout: "" May 25 21:42:30.448: INFO: update-demo-nautilus-k7j2x is created but not running May 25 21:42:35.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7686' May 25 21:42:35.561: INFO: stderr: "" May 25 21:42:35.561: INFO: stdout: "update-demo-nautilus-k7j2x update-demo-nautilus-khh4j " May 25 21:42:35.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7j2x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:42:35.645: INFO: stderr: "" May 25 21:42:35.645: INFO: stdout: "true" May 25 21:42:35.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k7j2x -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:42:35.738: INFO: stderr: "" May 25 21:42:35.738: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 21:42:35.738: INFO: validating pod update-demo-nautilus-k7j2x May 25 21:42:35.778: INFO: got data: { "image": "nautilus.jpg" } May 25 21:42:35.778: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 21:42:35.778: INFO: update-demo-nautilus-k7j2x is verified up and running May 25 21:42:35.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-khh4j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:42:35.879: INFO: stderr: "" May 25 21:42:35.879: INFO: stdout: "true" May 25 21:42:35.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-khh4j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:42:35.971: INFO: stderr: "" May 25 21:42:35.971: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 21:42:35.971: INFO: validating pod update-demo-nautilus-khh4j May 25 21:42:35.994: INFO: got data: { "image": "nautilus.jpg" } May 25 21:42:35.994: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 21:42:35.994: INFO: update-demo-nautilus-khh4j is verified up and running STEP: rolling-update to new replication controller May 25 21:42:35.996: INFO: scanned /root for discovery docs: May 25 21:42:35.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7686' May 25 21:42:59.134: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 25 21:42:59.134: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 25 21:42:59.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7686' May 25 21:42:59.235: INFO: stderr: "" May 25 21:42:59.235: INFO: stdout: "update-demo-kitten-c85cg update-demo-kitten-qm79w update-demo-nautilus-k7j2x " STEP: Replicas for name=update-demo: expected=2 actual=3 May 25 21:43:04.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7686' May 25 21:43:04.345: INFO: stderr: "" May 25 21:43:04.345: INFO: stdout: "update-demo-kitten-c85cg update-demo-kitten-qm79w " May 25 21:43:04.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c85cg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:43:04.431: INFO: stderr: "" May 25 21:43:04.431: INFO: stdout: "true" May 25 21:43:04.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-c85cg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:43:04.529: INFO: stderr: "" May 25 21:43:04.529: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 25 21:43:04.529: INFO: validating pod update-demo-kitten-c85cg May 25 21:43:04.541: INFO: got data: { "image": "kitten.jpg" } May 25 21:43:04.541: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 25 21:43:04.541: INFO: update-demo-kitten-c85cg is verified up and running May 25 21:43:04.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qm79w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:43:04.640: INFO: stderr: "" May 25 21:43:04.640: INFO: stdout: "true" May 25 21:43:04.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qm79w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7686' May 25 21:43:04.730: INFO: stderr: "" May 25 21:43:04.730: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 25 21:43:04.730: INFO: validating pod update-demo-kitten-qm79w May 25 21:43:04.739: INFO: got data: { "image": "kitten.jpg" } May 25 21:43:04.739: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 25 21:43:04.739: INFO: update-demo-kitten-qm79w is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:43:04.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7686" for this suite. 
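As the stderr above notes, kubectl rolling-update was already deprecated at this release (and was removed in 1.18); it performs the scale-up/scale-down dance client-side between two replication controllers. A rough sketch of both the legacy invocation exercised here and the modern Deployment-based equivalent, with hypothetical manifest and resource names:

# Legacy: client-side rolling update between two replication controllers
kubectl create -f nautilus-rc.yaml            # hypothetical manifest for update-demo-nautilus
kubectl rolling-update update-demo-nautilus --update-period=1s -f kitten-rc.yaml

# Modern equivalent: let a Deployment roll pods server-side
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo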
• [SLOW TEST:34.926 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":104,"skipped":1588,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:43:04.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-a6d2c78b-8aae-440c-a1c2-aa8e2385a399 STEP: Creating a pod to test consume secrets May 25 21:43:04.816: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c" in namespace "projected-4712" to be "success or failure" May 25 21:43:04.821: INFO: Pod "pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131642ms May 25 21:43:06.825: INFO: Pod "pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008341994s May 25 21:43:08.829: INFO: Pod "pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c": Phase="Running", Reason="", readiness=true. Elapsed: 4.012716924s May 25 21:43:10.832: INFO: Pod "pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015907169s STEP: Saw pod success May 25 21:43:10.832: INFO: Pod "pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c" satisfied condition "success or failure" May 25 21:43:10.834: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c container projected-secret-volume-test: STEP: delete the pod May 25 21:43:10.852: INFO: Waiting for pod pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c to disappear May 25 21:43:10.862: INFO: Pod pod-projected-secrets-ccf1dab3-c164-4177-aa65-23bddadaa38c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:43:10.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4712" for this suite. 
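The projected-secret spec above mounts a Secret through a projected volume and reads it back from the container. A minimal sketch, with hypothetical names my-secret and pod-projected-secret:

kubectl create secret generic my-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/projected-volume/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /projected-volume
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: my-secret
EOF
kubectl logs pod-projected-secret   # expect: value-1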
• [SLOW TEST:6.123 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1591,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:43:10.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 25 21:43:11.395: INFO: Waiting up to 5m0s for pod "pod-e7989b98-d528-4b48-ad32-1b1f15b3872b" in namespace "emptydir-6261" to be "success or failure" May 25 21:43:11.408: INFO: Pod "pod-e7989b98-d528-4b48-ad32-1b1f15b3872b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.57565ms May 25 21:43:13.412: INFO: Pod "pod-e7989b98-d528-4b48-ad32-1b1f15b3872b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01692478s May 25 21:43:15.424: INFO: Pod "pod-e7989b98-d528-4b48-ad32-1b1f15b3872b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028886874s STEP: Saw pod success May 25 21:43:15.424: INFO: Pod "pod-e7989b98-d528-4b48-ad32-1b1f15b3872b" satisfied condition "success or failure" May 25 21:43:15.427: INFO: Trying to get logs from node jerma-worker2 pod pod-e7989b98-d528-4b48-ad32-1b1f15b3872b container test-container: STEP: delete the pod May 25 21:43:15.492: INFO: Waiting for pod pod-e7989b98-d528-4b48-ad32-1b1f15b3872b to disappear May 25 21:43:15.496: INFO: Pod pod-e7989b98-d528-4b48-ad32-1b1f15b3872b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:43:15.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6261" for this suite. 
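For the (root,0777,tmpfs) case above, the only spec-level difference from the default-medium test is medium: Memory on the emptyDir; the 0777 bits belong to a file the test writes as root. A sketch under those assumptions (pod name hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    securityContext:
      runAsUser: 0               # root, per the (root,...) variant
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF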
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1615,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:43:15.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:43:37.754: INFO: Container started at 2020-05-25 21:43:18 +0000 UTC, pod became ready at 2020-05-25 21:43:37 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:43:37.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8993" for this suite. • [SLOW TEST:22.255 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1621,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:43:37.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-6b2d19fc-d310-4ef2-91c3-66ce311957b7 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:43:37.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9773" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":108,"skipped":1643,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:43:37.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 25 21:43:43.006: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:43:43.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2368" for this suite. • [SLOW TEST:5.286 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":109,"skipped":1644,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:43:43.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 25 21:43:43.164: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 21:43:43.177: INFO: Waiting for terminating namespaces to be deleted... 
May 25 21:43:43.179: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 25 21:43:43.185: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 25 21:43:43.185: INFO: Container kindnet-cni ready: true, restart count 0 May 25 21:43:43.185: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 25 21:43:43.185: INFO: Container kube-proxy ready: true, restart count 0 May 25 21:43:43.185: INFO: pod-adoption-release-527bk from replicaset-2368 started at 2020-05-25 21:43:43 +0000 UTC (1 container status recorded) May 25 21:43:43.185: INFO: Container pod-adoption-release ready: false, restart count 0 May 25 21:43:43.185: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 25 21:43:43.265: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 25 21:43:43.265: INFO: Container kindnet-cni ready: true, restart count 0 May 25 21:43:43.265: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 25 21:43:43.265: INFO: Container kube-bench ready: false, restart count 0 May 25 21:43:43.265: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 25 21:43:43.265: INFO: Container kube-proxy ready: true, restart count 0 May 25 21:43:43.265: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 25 21:43:43.265: INFO: Container kube-hunter ready: false, restart count 0 May 25 21:43:43.265: INFO: test-webserver-a5a28fe2-e049-418c-97fc-5ecb3141ee31 from container-probe-8993 started at 2020-05-25 21:43:15 +0000 UTC (1 container status recorded) May 25 21:43:43.265: INFO: Container test-webserver ready: true, restart count 0 May 25 21:43:43.265: INFO: pod-adoption-release from replicaset-2368 started at 2020-05-25 21:43:37 +0000 UTC (1 container status recorded) May 25 21:43:43.265: INFO: Container pod-adoption-release ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node that can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-253bb645-16f8-429f-8f8a-2a6384130e76 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-253bb645-16f8-429f-8f8a-2a6384130e76 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-253bb645-16f8-429f-8f8a-2a6384130e76 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:48:53.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-564" for this suite.
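The predicate above exercises hostPort conflict filtering: a pod that binds hostPort 54322 on 0.0.0.0 blocks a later pod requesting the same port and protocol on 127.0.0.1 of the same node, so the second pod stays Pending for the full scheduling timeout (hence the five-minute gap in the timestamps). A sketch of the two pod specs, pinning both to one node via the well-known hostname label (node name taken from this run; pod names hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]              # just stay running
    ports:
    - containerPort: 8080
      hostPort: 54322            # hostIP omitted == 0.0.0.0
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1          # same port+protocol: conflicts with the 0.0.0.0 binding
EOF
kubectl get pod pod5             # expected to remain Pending on that node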
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:310.412 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":110,"skipped":1658,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:48:53.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-cac1cb77-9723-4885-af08-cbb4fc467420 STEP: Creating a pod to test consume configMaps May 25 21:48:53.627: INFO: Waiting up to 5m0s for pod "pod-configmaps-73a3abd2-90f7-4cad-b383-ffcf6d94685e" in namespace "configmap-8467" to be "success or failure" May 25 21:48:53.630: INFO: Pod "pod-configmaps-73a3abd2-90f7-4cad-b383-ffcf6d94685e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.22574ms May 25 21:48:55.674: INFO: Pod "pod-configmaps-73a3abd2-90f7-4cad-b383-ffcf6d94685e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046955947s May 25 21:48:57.678: INFO: Pod "pod-configmaps-73a3abd2-90f7-4cad-b383-ffcf6d94685e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05133535s STEP: Saw pod success May 25 21:48:57.678: INFO: Pod "pod-configmaps-73a3abd2-90f7-4cad-b383-ffcf6d94685e" satisfied condition "success or failure" May 25 21:48:57.682: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-73a3abd2-90f7-4cad-b383-ffcf6d94685e container configmap-volume-test: STEP: delete the pod May 25 21:48:57.720: INFO: Waiting for pod pod-configmaps-73a3abd2-90f7-4cad-b383-ffcf6d94685e to disappear May 25 21:48:57.741: INFO: Pod pod-configmaps-73a3abd2-90f7-4cad-b383-ffcf6d94685e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:48:57.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8467" for this suite. 
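The ConfigMap spec above mounts a single ConfigMap into the same pod at two paths and reads both copies. A minimal sketch (names hypothetical):

kubectl create configmap my-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-twice      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    configMap:
      name: my-config
  - name: cm-two
    configMap:
      name: my-config            # same ConfigMap, second mount
EOF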
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1723,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:48:57.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:48:57.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8d631728-558a-4c1a-8b63-ab2b2310136c" in namespace "downward-api-1596" to be "success or failure" May 25 21:48:57.845: INFO: Pod "downwardapi-volume-8d631728-558a-4c1a-8b63-ab2b2310136c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.955008ms May 25 21:48:59.848: INFO: Pod "downwardapi-volume-8d631728-558a-4c1a-8b63-ab2b2310136c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014828753s May 25 21:49:01.851: INFO: Pod "downwardapi-volume-8d631728-558a-4c1a-8b63-ab2b2310136c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018120989s STEP: Saw pod success May 25 21:49:01.851: INFO: Pod "downwardapi-volume-8d631728-558a-4c1a-8b63-ab2b2310136c" satisfied condition "success or failure" May 25 21:49:01.853: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8d631728-558a-4c1a-8b63-ab2b2310136c container client-container: STEP: delete the pod May 25 21:49:01.913: INFO: Waiting for pod downwardapi-volume-8d631728-558a-4c1a-8b63-ab2b2310136c to disappear May 25 21:49:01.923: INFO: Pod downwardapi-volume-8d631728-558a-4c1a-8b63-ab2b2310136c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:49:01.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1596" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1723,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:49:01.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 25 21:49:02.650: INFO: Pod name wrapped-volume-race-eced77f9-31b8-488f-a1a7-e7fc79a75bf5: Found 0 pods out of 5 May 25 21:49:07.658: INFO: Pod name wrapped-volume-race-eced77f9-31b8-488f-a1a7-e7fc79a75bf5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-eced77f9-31b8-488f-a1a7-e7fc79a75bf5 in namespace emptydir-wrapper-8130, will wait for the garbage collector to delete the pods May 25 21:49:21.756: INFO: Deleting ReplicationController wrapped-volume-race-eced77f9-31b8-488f-a1a7-e7fc79a75bf5 took: 22.627031ms May 25 21:49:22.156: INFO: Terminating ReplicationController wrapped-volume-race-eced77f9-31b8-488f-a1a7-e7fc79a75bf5 pods took: 400.413815ms STEP: Creating RC which spawns configmap-volume pods May 25 21:49:29.815: INFO: Pod name wrapped-volume-race-b52bfb56-a2bc-4b2e-8850-1d6b1432caf7: Found 0 pods out of 5 May 25 21:49:34.823: INFO: Pod name wrapped-volume-race-b52bfb56-a2bc-4b2e-8850-1d6b1432caf7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b52bfb56-a2bc-4b2e-8850-1d6b1432caf7 in namespace emptydir-wrapper-8130, will wait for the garbage collector to delete the pods May 25 21:49:50.967: INFO: Deleting ReplicationController wrapped-volume-race-b52bfb56-a2bc-4b2e-8850-1d6b1432caf7 took: 8.514515ms May 25 21:49:51.367: INFO: Terminating ReplicationController wrapped-volume-race-b52bfb56-a2bc-4b2e-8850-1d6b1432caf7 pods took: 400.266168ms STEP: Creating RC which spawns configmap-volume pods May 25 21:50:00.315: INFO: Pod name wrapped-volume-race-ce939fea-c1ed-4d07-8247-00a42ac3f5f0: Found 0 pods out of 5 May 25 21:50:05.325: INFO: Pod name wrapped-volume-race-ce939fea-c1ed-4d07-8247-00a42ac3f5f0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ce939fea-c1ed-4d07-8247-00a42ac3f5f0 in namespace emptydir-wrapper-8130, will wait for the garbage collector to delete the pods May 25 21:50:17.540: INFO: Deleting ReplicationController wrapped-volume-race-ce939fea-c1ed-4d07-8247-00a42ac3f5f0 took: 17.936123ms May 25 21:50:17.941: INFO: Terminating ReplicationController wrapped-volume-race-ce939fea-c1ed-4d07-8247-00a42ac3f5f0 pods took: 400.724987ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:50:30.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8130" for this suite. • [SLOW TEST:88.634 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":113,"skipped":1737,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:50:30.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:50:30.679: INFO: Creating deployment "webserver-deployment" May 25 21:50:30.683: INFO: Waiting for observed generation 1 May 25 21:50:33.149: INFO: Waiting for all required pods to come up May 25 21:50:33.153: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 25 21:50:43.556: INFO: Waiting for deployment "webserver-deployment" to complete May 25 21:50:43.570: INFO: Updating deployment "webserver-deployment" with a non-existent image May 25 21:50:43.594: INFO: Updating deployment webserver-deployment May 25 21:50:43.594: INFO: Waiting for observed generation 2 May 25 21:50:45.736: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 25 21:50:46.006: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 25 21:50:46.008: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 25 21:50:46.017: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 25 21:50:46.017: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 25 21:50:46.019: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 25 21:50:46.024: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 25 21:50:46.024: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 25 21:50:46.033: INFO: Updating deployment webserver-deployment May 25 21:50:46.033: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 25 21:50:46.334: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 25 21:50:48.893: INFO: Verifying that second 
rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 25 21:50:49.056: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9854 /apis/apps/v1/namespaces/deployment-9854/deployments/webserver-deployment fc99a090-e1be-467e-ba0d-2df965bff252 19124049 3 2020-05-25 21:50:30 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048cd1d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-25 21:50:46 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-25 21:50:46 +0000 UTC,LastTransitionTime:2020-05-25 21:50:30 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 25 21:50:49.118: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9854 /apis/apps/v1/namespaces/deployment-9854/replicasets/webserver-deployment-c7997dcc8 2b14827f-e5ed-4a61-a73c-c72f2990aabd 19124045 3 2020-05-25 21:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment fc99a090-e1be-467e-ba0d-2df965bff252 0xc0048cd6a7 0xc0048cd6a8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048cd718 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 21:50:49.118: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 25 21:50:49.118: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9854 /apis/apps/v1/namespaces/deployment-9854/replicasets/webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 19124029 3 2020-05-25 21:50:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment fc99a090-e1be-467e-ba0d-2df965bff252 0xc0048cd5e7 0xc0048cd5e8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0048cd648 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 25 21:50:49.147: INFO: Pod "webserver-deployment-595b5b9587-8nddp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8nddp webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-8nddp 9756e878-bad5-4051-a058-8c7f66ba0e01 19124043 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc002cf7ee0 0xc002cf7ee1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-25 21:50:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.147: INFO: Pod "webserver-deployment-595b5b9587-9j7rf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9j7rf webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-9j7rf 51b002bb-2862-49d0-b97b-e7e258a4c18e 19123714 0 2020-05-25 21:50:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a20a7 0xc0050a20a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.111,StartTime:2020-05-25 21:50:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:50:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8cd6a78712f6afd3485e9c563c6546123099cee851b65a9bee74c8eb6e0e3ffc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.111,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.147: INFO: Pod "webserver-deployment-595b5b9587-9slc5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9slc5 webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-9slc5 f7fc47fc-f1bf-4f8c-a354-d002c84eca18 19123775 0 2020-05-25 21:50:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a2227 0xc0050a2228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.153,StartTime:2020-05-25 21:50:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:50:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2df8d30de5b41bbcdb5498d0852779f968cba94d5749f1985dd3db0dbca2f68b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.153,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.148: INFO: Pod "webserver-deployment-595b5b9587-b8rrz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b8rrz webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-b8rrz 52150c97-eba1-4a29-b784-be7cf6ff2231 19124115 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a23a7 0xc0050a23a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Va
lue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-25 21:50:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.148: INFO: Pod "webserver-deployment-595b5b9587-d5gmb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d5gmb webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-d5gmb badff04c-cd6b-4f96-9807-24969bab563f 19124064 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a2507 0xc0050a2508}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-25 21:50:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.148: INFO: Pod "webserver-deployment-595b5b9587-dsj96" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dsj96 webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-dsj96 77924b7f-8a7d-41e9-8547-2492438da8fe 19123774 0 2020-05-25 21:50:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a2667 0xc0050a2668}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,En
ableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.112,StartTime:2020-05-25 21:50:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:50:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fbffeb17016faaae4d853e0bccc19eae00db9739a1bac8759910c75132c0e488,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.112,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.148: INFO: Pod "webserver-deployment-595b5b9587-fg7r7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fg7r7 webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-fg7r7 c1083068-8f7c-4d4a-b894-f15da40f9dac 19124028 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a27e7 0xc0050a27e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-25 21:50:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.149: INFO: Pod "webserver-deployment-595b5b9587-fm5lb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fm5lb webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-fm5lb 3c631235-5fc5-4019-927b-8ed3d220000f 19124068 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a2947 0xc0050a2948}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:ni
l,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-25 21:50:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.149: INFO: Pod "webserver-deployment-595b5b9587-glpnp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-glpnp webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-glpnp 9fef83b6-9de2-4ff3-bbb5-44fbb928bb42 19124095 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a2ab7 0xc0050a2ab8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-25 21:50:46 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.149: INFO: Pod "webserver-deployment-595b5b9587-gwwhj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gwwhj webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-gwwhj a85c7596-96a0-4304-91e2-4cf13d5d1cf1 19124111 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a2c17 0xc0050a2c18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:n
il,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-25 21:50:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.150: INFO: Pod "webserver-deployment-595b5b9587-jqvg7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jqvg7 webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-jqvg7 7db69094-d85a-4569-8575-07da5b9b0bb5 19123718 0 2020-05-25 21:50:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a2d77 0xc0050a2d78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.110,StartTime:2020-05-25 21:50:31 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:50:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://64275257f0ff29844c42a2ae3b53ae362db8349645c96dd9dcc245b9e3299f44,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.110,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.150: INFO: Pod "webserver-deployment-595b5b9587-kgstm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kgstm webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-kgstm 62f23884-bcd1-4633-9cf9-e264279f09a2 19123709 0 2020-05-25 21:50:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a2ef7 0xc0050a2ef8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.150,StartTime:2020-05-25 21:50:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:50:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://91180affe9390dde2afc27955dc09a7570114d3f6af66c9de78bb09a59b6feed,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.150,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.150: INFO: Pod "webserver-deployment-595b5b9587-ktrdm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ktrdm webserver-deployment-595b5b9587- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-595b5b9587-ktrdm 0298e033-8726-4c94-83b4-01e082acdf1f 19123680 0 2020-05-25 21:50:30 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 d56af1e4-c680-4043-88d4-c43961d259ab 0xc0050a3077 0xc0050a3078}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.149,StartTime:2020-05-25 21:50:30 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:50:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a5c17cd1908c5bea4cbae53d961043732aecf7967c733fa19cc0a85fbed684f1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May 25 21:50:49.150: INFO: Pod "webserver-deployment-595b5b9587-m8gqj" is not available: Pending on node jerma-worker2 (hostIP 172.17.0.8); container httpd (image docker.io/library/httpd:2.4.38-alpine) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.151: INFO: Pod "webserver-deployment-595b5b9587-n6547" is available: Running and Ready on node jerma-worker (hostIP 172.17.0.10, podIP 10.244.1.109); container httpd (image docker.io/library/httpd:2.4.38-alpine) running since 2020-05-25 21:50:36 +0000 UTC; created 2020-05-25 21:50:30 +0000 UTC
May 25 21:50:49.151: INFO: Pod "webserver-deployment-595b5b9587-n9lmh" is not available: Pending on node jerma-worker (hostIP 172.17.0.10); container httpd (image docker.io/library/httpd:2.4.38-alpine) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.151: INFO: Pod "webserver-deployment-595b5b9587-qwjf4" is available: Running and Ready on node jerma-worker (hostIP 172.17.0.10, podIP 10.244.1.113); container httpd (image docker.io/library/httpd:2.4.38-alpine) running since 2020-05-25 21:50:41 +0000 UTC; created 2020-05-25 21:50:30 +0000 UTC
May 25 21:50:49.151: INFO: Pod "webserver-deployment-595b5b9587-rszpz" is not available: Pending on node jerma-worker (hostIP 172.17.0.10); container httpd (image docker.io/library/httpd:2.4.38-alpine) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
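A note on the verdicts above: "available" here comes down to the pod being in phase Running with its Ready condition True, which the Pending pods fail. As a rough, illustrative re-implementation over the k8s.io/api/core/v1 types (not the test framework's own helper), one could write:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // isPodAvailable reports whether a pod would count as "available" in the
    // sense of the log lines above: phase Running and Ready condition True.
    // Illustrative sketch only; the e2e framework has its own helpers.
    func isPodAvailable(pod *corev1.Pod) bool {
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{}
        pod.Status.Phase = corev1.PodPending // e.g. a pod still in ContainerCreating
        fmt.Println(isPodAvailable(pod))     // prints: false
    }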
May 25 21:50:49.152: INFO: Pod "webserver-deployment-595b5b9587-sc6tg" is not available: Pending on node jerma-worker2 (hostIP 172.17.0.8); container httpd (image docker.io/library/httpd:2.4.38-alpine) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.152: INFO: Pod "webserver-deployment-595b5b9587-zdsn5" is not available: Pending on node jerma-worker2 (hostIP 172.17.0.8); container httpd (image docker.io/library/httpd:2.4.38-alpine) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.152: INFO: Pod "webserver-deployment-c7997dcc8-4bp2l" is not available: Pending on node jerma-worker (hostIP 172.17.0.10, podIP 10.244.1.115); container httpd (image webserver:404) waiting: ErrImagePull (rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed); created 2020-05-25 21:50:43 +0000 UTC
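The c7997dcc8 pods can never become ready: their image tag webserver:404 does not exist, so the kubelet reports ErrImagePull with the registry's "pull access denied / repository does not exist" message and the pods stay Pending. A small hypothetical helper (using only the corev1 status fields visible in these dumps) that surfaces such waiting reasons might look like:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // waitingReasons collects "<container>: <reason> (<message>)" for every
    // container stuck in a Waiting state, e.g.
    // "httpd: ErrImagePull (... pull access denied ...)" for webserver:404.
    func waitingReasons(pod *corev1.Pod) []string {
        var out []string
        for _, cs := range pod.Status.ContainerStatuses {
            if w := cs.State.Waiting; w != nil {
                out = append(out, fmt.Sprintf("%s: %s (%s)", cs.Name, w.Reason, w.Message))
            }
        }
        return out
    }

    func main() {
        pod := &corev1.Pod{}
        pod.Status.ContainerStatuses = []corev1.ContainerStatus{{
            Name: "httpd",
            State: corev1.ContainerState{Waiting: &corev1.ContainerStateWaiting{
                Reason:  "ErrImagePull",
                Message: "pull access denied",
            }},
        }}
        fmt.Println(waitingReasons(pod)) // [httpd: ErrImagePull (pull access denied)]
    }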
May 25 21:50:49.152: INFO: Pod "webserver-deployment-c7997dcc8-4t87g" is not available: Pending on node jerma-worker (hostIP 172.17.0.10); container httpd (image webserver:404) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.153: INFO: Pod "webserver-deployment-c7997dcc8-4v4q4" is not available: Pending on node jerma-worker (hostIP 172.17.0.10); container httpd (image webserver:404) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.153: INFO: Pod "webserver-deployment-c7997dcc8-57cdm" is not available: Pending on node jerma-worker2 (hostIP 172.17.0.8, podIP 10.244.2.154); container httpd (image webserver:404) waiting: ErrImagePull (rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed); created 2020-05-25 21:50:43 +0000 UTC
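Pods from both ReplicaSets are listed side by side because the rollout is in flight: the old template hash 595b5b9587 runs httpd:2.4.38-alpine while the new hash c7997dcc8 carries the unpullable webserver:404. A minimal sketch of splitting such a pod list by template (an assumed helper, grouping on the standard pod-template-hash label) could be:

    package podgroups

    import corev1 "k8s.io/api/core/v1"

    // podsByTemplateHash groups pods by their pod-template-hash label, which is
    // what distinguishes the 595b5b9587 (httpd:2.4.38-alpine) pods from the
    // c7997dcc8 (webserver:404) pods in this log. Illustrative sketch only.
    func podsByTemplateHash(pods []corev1.Pod) map[string][]corev1.Pod {
        groups := make(map[string][]corev1.Pod)
        for _, p := range pods {
            hash := p.Labels["pod-template-hash"]
            groups[hash] = append(groups[hash], p)
        }
        return groups
    }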
May 25 21:50:49.154: INFO: Pod "webserver-deployment-c7997dcc8-d5bq4" is not available: Pending on node jerma-worker2 (hostIP 172.17.0.8); container httpd (image webserver:404) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.154: INFO: Pod "webserver-deployment-c7997dcc8-k7tvj" is not available: Pending on node jerma-worker2 (hostIP 172.17.0.8); container httpd (image webserver:404) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.154: INFO: Pod "webserver-deployment-c7997dcc8-lbqtx" is not available: Pending on node jerma-worker (hostIP 172.17.0.10); container httpd (image webserver:404) waiting: ContainerCreating; created 2020-05-25 21:50:46 +0000 UTC
May 25 21:50:49.154: INFO: Pod "webserver-deployment-c7997dcc8-lsv26" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lsv26 webserver-deployment-c7997dcc8- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-c7997dcc8-lsv26 fe21830d-acf1-421f-8d92-176c232dfbe4 19124071 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2b14827f-e5ed-4a61-a73c-c72f2990aabd 0xc0051266c7 0xc0051266c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-25 21:50:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.154: INFO: Pod "webserver-deployment-c7997dcc8-nn4pm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-nn4pm webserver-deployment-c7997dcc8- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-c7997dcc8-nn4pm c6f3f902-556b-42b8-802d-ecfd35f1d66f 19123950 0 2020-05-25 21:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2b14827f-e5ed-4a61-a73c-c72f2990aabd 0xc005126847 0xc005126848}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-25 21:50:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.154: INFO: Pod "webserver-deployment-c7997dcc8-rzhp6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rzhp6 webserver-deployment-c7997dcc8- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-c7997dcc8-rzhp6 b8b517d7-7c1c-457f-8978-6edf5d9fbc02 19123937 0 2020-05-25 21:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2b14827f-e5ed-4a61-a73c-c72f2990aabd 0xc0051269c7 0xc0051269c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-25 21:50:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.155: INFO: Pod "webserver-deployment-c7997dcc8-snmrn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-snmrn webserver-deployment-c7997dcc8- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-c7997dcc8-snmrn a32ff845-adf4-46cf-95c9-db8d37ead3c1 19124112 0 2020-05-25 21:50:43 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2b14827f-e5ed-4a61-a73c-c72f2990aabd 0xc005126b47 0xc005126b48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.114,StartTime:2020-05-25 21:50:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.114,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.155: INFO: Pod "webserver-deployment-c7997dcc8-tkwhn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tkwhn webserver-deployment-c7997dcc8- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-c7997dcc8-tkwhn b4a3d5f1-57c2-4236-be9c-c83a7e650c83 19124094 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2b14827f-e5ed-4a61-a73c-c72f2990aabd 0xc005126cf7 0xc005126cf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Tole
ration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-25 21:50:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 25 21:50:49.155: INFO: Pod "webserver-deployment-c7997dcc8-vmtr7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vmtr7 webserver-deployment-c7997dcc8- deployment-9854 /api/v1/namespaces/deployment-9854/pods/webserver-deployment-c7997dcc8-vmtr7 a41a9602-a081-4de3-b691-fd5d237962d8 19124031 0 2020-05-25 21:50:46 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 2b14827f-e5ed-4a61-a73c-c72f2990aabd 0xc005126e77 0xc005126e78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4wnxn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4wnxn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4wnxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:50:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:50:49.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9854" for this suite. 
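All of the pods dumped above share one failure signature: Phase=Pending with the single httpd container Waiting on ContainerCreating or ErrImagePull, because the rollout's template references the unpullable image webserver:404 (the registry denies access, per the dump that did get a PodIP). The test only needs these pods to exist so it can verify that replicas were split proportionally between the old and new ReplicaSets. A minimal client-go sketch (names illustrative; assumes client-go v0.18 or newer, where API calls take a context) that reproduces the "is not available" listing:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podAvailable mirrors the framework's notion of availability:
// the pod's Ready condition must be True.
func podAvailable(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same namespace and label selector as the pods dumped above.
	pods, err := cs.CoreV1().Pods("deployment-9854").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		if p := &pods.Items[i]; !podAvailable(p) {
			fmt.Printf("Pod %q is not available: phase=%s\n", p.Name, p.Status.Phase)
		}
	}
}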
• [SLOW TEST:19.208 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":114,"skipped":1750,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:50:49.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 25 21:50:51.574: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-a 2e2dcd51-2d42-4caf-817a-477be0052b04 19124131 0 2020-05-25 21:50:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 25 21:50:51.574: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-a 2e2dcd51-2d42-4caf-817a-477be0052b04 19124131 0 2020-05-25 21:50:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 25 21:51:01.650: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-a 2e2dcd51-2d42-4caf-817a-477be0052b04 19124225 0 2020-05-25 21:50:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 25 21:51:01.650: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-a 2e2dcd51-2d42-4caf-817a-477be0052b04 19124225 0 2020-05-25 21:50:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 25 21:51:12.097: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-a 2e2dcd51-2d42-4caf-817a-477be0052b04 19124485 0 2020-05-25 21:50:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 25 21:51:12.097: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-a 2e2dcd51-2d42-4caf-817a-477be0052b04 19124485 0 2020-05-25 21:50:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 25 21:51:22.104: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-a 2e2dcd51-2d42-4caf-817a-477be0052b04 19124525 0 2020-05-25 21:50:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 25 21:51:22.105: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-a 2e2dcd51-2d42-4caf-817a-477be0052b04 19124525 0 2020-05-25 21:50:51 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 25 21:51:32.112: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-b 082bf09a-2402-42e7-9dc9-4b15aebf7827 19124555 0 2020-05-25 21:51:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 25 21:51:32.112: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-b 082bf09a-2402-42e7-9dc9-4b15aebf7827 19124555 0 2020-05-25 21:51:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 25 21:51:42.120: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-b 082bf09a-2402-42e7-9dc9-4b15aebf7827 19124585 0 2020-05-25 21:51:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 25 21:51:42.120: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2438 /api/v1/namespaces/watch-2438/configmaps/e2e-watch-test-configmap-b 082bf09a-2402-42e7-9dc9-4b15aebf7827 19124585 0 2020-05-25 21:51:32 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:51:52.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2438" for this suite. 
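Every notification above appears twice because two of the three watches match each object: the single-label watch (A or B) and the A-or-B watch; the identical resourceVersion in each pair confirms both watchers saw the same update. A compact sketch of one such watch with client-go (assumes client-go v0.18 or newer; the namespace and selector are the ones the log shows):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// One of the three watches the test opens: label A only.
	w, err := cs.CoreV1().ConfigMaps("watch-2438").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "watch-this-configmap=multiple-watchers-A"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Events arrive typed ADDED / MODIFIED / DELETED, as in the log.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}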
• [SLOW TEST:62.330 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":115,"skipped":1759,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:51:52.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 25 21:51:52.935: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 25 21:51:54.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040312, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040312, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040313, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040312, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:51:57.979: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:51:57.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:51:59.169: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "crd-webhook-2470" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.130 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":116,"skipped":1761,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:51:59.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 21:52:03.392: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:52:03.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8306" for this suite. 
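The termination-message check passes because the container writes "DONE" to its configured terminationMessagePath before exiting; the kubelet then copies that file into the container's status, which is what the Expected/actual comparison above verifies. A hedged sketch of the pod shape involved (image, UID, and path are illustrative assumptions, not the exact values the e2e test uses):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(i int64) *int64 { return &i }

// terminationDemoPod builds a pod whose single container runs as a
// non-root user and writes its termination message to a non-default
// path, matching the scenario exercised above.
func terminationDemoPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Write the message, then exit 0 so the pod Succeeds.
				Command:                []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				TerminationMessagePath: "/dev/termination-custom-log",
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: int64Ptr(1000), // any non-root UID
				},
			}},
		},
	}
}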
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:52:03.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 25 21:52:03.666: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:52:18.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5448" for this suite. 
• [SLOW TEST:15.102 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":118,"skipped":1830,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:52:18.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0525 21:52:29.919927 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 21:52:29.919: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:52:29.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1143" for this suite. 
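The pods that carry both owners survive: foreground deletion of simpletest-rc-to-be-deleted blocks on its dependents, but the garbage collector skips any dependent that still lists another valid owner (simpletest-rc-to-stay). A sketch of the two moves involved, assuming client-go v0.18 or newer (function names are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addExtraOwner gives a pod a second owner, as the test does for half
// of the pods created by simpletest-rc-to-be-deleted.
func addExtraOwner(pod *corev1.Pod, stay *corev1.ReplicationController) {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       stay.Name,
		UID:        stay.UID,
	})
}

// deleteForeground removes the first owner with foreground propagation;
// the GC waits on its dependents but must not delete any pod that still
// has a valid surviving owner.
func deleteForeground(ctx context.Context, cs kubernetes.Interface, ns string) error {
	fg := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx,
		"simpletest-rc-to-be-deleted", metav1.DeleteOptions{PropagationPolicy: &fg})
}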
• [SLOW TEST:11.256 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":119,"skipped":1832,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:52:29.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-18fde126-54f9-4bfb-b623-72b974d9fd00 STEP: Creating a pod to test consume secrets May 25 21:52:30.054: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e715b54d-6d5f-4f72-8ed4-81ede5236224" in namespace "projected-8752" to be "success or failure" May 25 21:52:30.066: INFO: Pod "pod-projected-secrets-e715b54d-6d5f-4f72-8ed4-81ede5236224": Phase="Pending", Reason="", readiness=false. Elapsed: 11.749559ms May 25 21:52:32.069: INFO: Pod "pod-projected-secrets-e715b54d-6d5f-4f72-8ed4-81ede5236224": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015463728s May 25 21:52:34.074: INFO: Pod "pod-projected-secrets-e715b54d-6d5f-4f72-8ed4-81ede5236224": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020280807s STEP: Saw pod success May 25 21:52:34.074: INFO: Pod "pod-projected-secrets-e715b54d-6d5f-4f72-8ed4-81ede5236224" satisfied condition "success or failure" May 25 21:52:34.078: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e715b54d-6d5f-4f72-8ed4-81ede5236224 container projected-secret-volume-test: STEP: delete the pod May 25 21:52:34.126: INFO: Waiting for pod pod-projected-secrets-e715b54d-6d5f-4f72-8ed4-81ede5236224 to disappear May 25 21:52:34.134: INFO: Pod pod-projected-secrets-e715b54d-6d5f-4f72-8ed4-81ede5236224 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:52:34.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8752" for this suite. 
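"Mappings and Item Mode set" means the projected secret maps a key to a custom path and sets a per-item file mode; the test container then verifies the mounted file's content and permissions. A sketch of that volume shape (secret name, key, path, and mode here are illustrative, not the generated values from the log above):

package main

import corev1 "k8s.io/api/core/v1"

var itemMode = int32(0400) // read-only for the file owner

// A projected secret volume mapping one key to a custom path with a
// per-item mode.
var projectedSecretVolume = corev1.Volume{
	Name: "projected-secret-volume",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				Secret: &corev1.SecretProjection{
					LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
					Items: []corev1.KeyToPath{{
						Key:  "data-1",
						Path: "new-path-data-1", // mapped path inside the mount
						Mode: &itemMode,         // per-item file mode
					}},
				},
			}},
		},
	},
}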
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1895,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:52:34.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:52:34.678: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 21:52:37.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040354, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040354, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040354, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040354, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 21:52:39.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040354, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040354, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040354, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040354, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:52:42.385: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:52:42.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1573" for this suite. STEP: Destroying namespace "webhook-1573-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.471 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":121,"skipped":1897,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:52:42.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 25 21:52:42.656: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:52:50.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6419" for this suite. 
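With RestartPolicy Never, each init container must run to completion, in order, before the app container starts; a failing init container marks the whole pod Failed rather than retrying. A minimal pod of that shape (images and commands are illustrative, not the ones the test uses):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initDemoPod sketches the scenario exercised above: on a RestartNever
// pod the init containers run sequentially to completion before the
// app container is started.
func initDemoPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "main", Image: "busybox", Command: []string{"echo", "done"}},
			},
		},
	}
}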
• [SLOW TEST:7.763 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":122,"skipped":1898,"failed":0} [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:52:50.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8465.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8465.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 21:52:56.522: INFO: DNS probes using dns-test-c065fd91-ce5a-4d66-801e-9efc7da09742 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8465.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8465.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 21:53:04.640: INFO: File wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local from pod dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 21:53:04.644: INFO: File jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local from pod dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 21:53:04.644: INFO: Lookups using dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 failed for: [wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local] May 25 21:53:09.650: INFO: File wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local from pod dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 25 21:53:09.653: INFO: File jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local from pod dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 21:53:09.653: INFO: Lookups using dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 failed for: [wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local] May 25 21:53:14.648: INFO: File wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local from pod dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 21:53:14.652: INFO: File jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local from pod dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 21:53:14.652: INFO: Lookups using dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 failed for: [wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local] May 25 21:53:19.673: INFO: File wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local from pod dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 21:53:19.712: INFO: File jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local from pod dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 contains 'foo.example.com. ' instead of 'bar.example.com.' May 25 21:53:19.712: INFO: Lookups using dns-8465/dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 failed for: [wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local] May 25 21:53:24.652: INFO: DNS probes using dns-test-3223e9c2-56b5-4f8b-98f5-f7a4443e6010 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8465.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8465.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8465.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8465.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 21:53:32.833: INFO: DNS probes using dns-test-8b848c3f-ca6a-4a8e-92b4-20e2075dba63 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:53:32.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8465" for this suite. 
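The probe sequence above, a CNAME that follows spec.externalName and is later repointed, can be checked by hand. A sketch with illustrative names; the dig commands must run from a pod inside the cluster so that cluster DNS answers, and caches may serve the old target briefly, which is why the test retries until bar.example.com. appears:

    # ExternalName service: cluster DNS serves a CNAME to spec.externalName.
    kubectl create service externalname dns-demo --external-name foo.example.com
    dig +short dns-demo.default.svc.cluster.local CNAME   # expect: foo.example.com.

    # Repoint the CNAME, then query again once caches expire.
    kubectl patch service dns-demo -p '{"spec":{"externalName":"bar.example.com"}}'
    dig +short dns-demo.default.svc.cluster.local CNAME   # expect: bar.example.com.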
• [SLOW TEST:42.597 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":123,"skipped":1898,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:53:32.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-511f70ca-5004-4fd8-aaea-ef1265bcdfb7 STEP: Creating a pod to test consume configMaps May 25 21:53:33.286: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b" in namespace "projected-1768" to be "success or failure" May 25 21:53:33.360: INFO: Pod "pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 74.530762ms May 25 21:53:35.365: INFO: Pod "pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078653899s May 25 21:53:37.369: INFO: Pod "pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b": Phase="Running", Reason="", readiness=true. Elapsed: 4.083067686s May 25 21:53:39.373: INFO: Pod "pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087016735s STEP: Saw pod success May 25 21:53:39.373: INFO: Pod "pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b" satisfied condition "success or failure" May 25 21:53:39.376: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b container projected-configmap-volume-test: STEP: delete the pod May 25 21:53:39.476: INFO: Waiting for pod pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b to disappear May 25 21:53:39.483: INFO: Pod pod-projected-configmaps-976a89b7-9070-40dc-9357-619cf59afd4b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:53:39.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1768" for this suite. 
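The defaultMode check above amounts to mounting a ConfigMap through a projected volume and asserting the resulting file permissions. A minimal sketch with illustrative names; the suite's test image performs the mode assertion itself, here a plain busybox ls -l stands in:

    kubectl create configmap cm-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-defaultmode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          defaultMode: 0400        # files appear as -r--------
          sources:
          - configMap:
              name: cm-demo
    EOF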
• [SLOW TEST:6.516 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1904,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:53:39.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5607.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5607.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 21:53:45.871: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:45.875: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:45.878: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:45.881: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:45.889: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:45.892: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:45.894: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod 
dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:45.898: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:45.904: INFO: Lookups using dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local] May 25 21:53:50.910: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:50.913: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:50.917: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:50.920: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:50.929: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:50.932: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:50.934: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:50.937: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:50.974: INFO: Lookups using dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local] May 25 21:53:55.910: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:55.914: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:55.918: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:55.920: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:55.929: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:55.932: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:55.935: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:55.938: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:53:55.943: INFO: Lookups using dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local] May 25 21:54:00.909: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:00.913: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:00.916: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:00.919: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:00.928: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:00.930: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:00.933: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:00.935: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:00.941: INFO: Lookups using dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local] May 25 21:54:05.910: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:05.914: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:05.917: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:05.921: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested 
resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:05.931: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:05.934: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:05.938: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:05.941: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:05.947: INFO: Lookups using dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local] May 25 21:54:10.910: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:10.914: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:10.931: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:10.934: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:10.944: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:10.947: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:10.950: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:10.953: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local from pod dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec: the server could not find the requested resource (get pods dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec) May 25 21:54:10.960: INFO: Lookups using dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5607.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5607.svc.cluster.local jessie_udp@dns-test-service-2.dns-5607.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5607.svc.cluster.local] May 25 21:54:15.947: INFO: DNS probes using dns-5607/dns-test-c49b9abc-9b87-40a9-89a7-14df8faa38ec succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:54:16.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5607" for this suite. • [SLOW TEST:37.400 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":125,"skipped":1929,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:54:16.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:54:25.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-142" for this suite. 
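The kubelet behaviour checked above, that a container whose command always fails ends up with a terminated reason instead of lingering in a waiting or running state, is straightforward to reproduce. A sketch with an illustrative pod name:

    # Run a container that exits non-zero and is never restarted.
    kubectl run always-fails --image=busybox --restart=Never -- /bin/false
    # Once the pod reaches Failed, the container status carries the reason:
    kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
    # expect: Error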
• [SLOW TEST:8.200 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1934,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:54:25.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 21:54:25.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-603' May 25 21:54:27.976: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 25 21:54:27.976: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 25 21:54:30.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-603' May 25 21:54:30.579: INFO: stderr: "" May 25 21:54:30.579: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:54:30.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-603" for this suite. 
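The stderr captured above is the v1.17 deprecation of generator-based kubectl run; the same deployment can be created with the command the warning points to. Both forms for comparison, using the image from the test:

    # Deprecated at v1.17 (what the test invokes):
    kubectl run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine \
        --generator=deployment/apps.v1
    # Non-deprecated replacement:
    kubectl create deployment e2e-test-httpd-deployment \
        --image=docker.io/library/httpd:2.4.38-alpine
    kubectl delete deployment e2e-test-httpd-deployment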
• [SLOW TEST:5.497 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":127,"skipped":1939,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:54:30.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:55:02.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4461" for this suite. STEP: Destroying namespace "nsdeletetest-7030" for this suite. May 25 21:55:02.107: INFO: Namespace nsdeletetest-7030 was already deleted STEP: Destroying namespace "nsdeletetest-1165" for this suite. 
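The lifecycle exercised above can be replayed manually: pods are garbage-collected with their namespace, and a recreated namespace of the same name starts empty. A sketch with illustrative names; the pause image is the one the suite itself schedules:

    kubectl create namespace nsdelete-demo
    kubectl run test-pod --image=k8s.gcr.io/pause:3.1 --restart=Never -n nsdelete-demo
    kubectl wait --for=condition=Ready pod/test-pod -n nsdelete-demo --timeout=60s
    # Deleting the namespace deletes everything in it; --wait blocks until
    # finalizers have run and the namespace object is gone.
    kubectl delete namespace nsdelete-demo --wait=true
    kubectl create namespace nsdelete-demo
    kubectl get pods -n nsdelete-demo    # expect: No resources found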
• [SLOW TEST:31.524 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":128,"skipped":1959,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:55:02.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 21:55:02.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9917' May 25 21:55:02.263: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 25 21:55:02.263: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 25 21:55:02.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-9917' May 25 21:55:02.412: INFO: stderr: "" May 25 21:55:02.412: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:55:02.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9917" for this suite. 
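As with the deployment case earlier, the generator form of kubectl run is deprecated here; kubectl create job is the non-deprecated equivalent. Both forms, using the image from the test:

    # Deprecated at v1.17 (what the test invokes):
    kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
        --image=docker.io/library/httpd:2.4.38-alpine
    # Non-deprecated replacement:
    kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
    kubectl delete job e2e-test-httpd-job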
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":129,"skipped":1973,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:55:02.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-83deccb2-b8d0-4669-90cf-c5015031faf6 STEP: Creating secret with name secret-projected-all-test-volume-fe3e9142-5123-4175-81b6-4d8a114b62ce STEP: Creating a pod to test Check all projections for projected volume plugin May 25 21:55:02.516: INFO: Waiting up to 5m0s for pod "projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79" in namespace "projected-6933" to be "success or failure" May 25 21:55:02.534: INFO: Pod "projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79": Phase="Pending", Reason="", readiness=false. Elapsed: 18.190353ms May 25 21:55:04.541: INFO: Pod "projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025348224s May 25 21:55:06.546: INFO: Pod "projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030324693s May 25 21:55:08.549: INFO: Pod "projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0336366s STEP: Saw pod success May 25 21:55:08.549: INFO: Pod "projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79" satisfied condition "success or failure" May 25 21:55:08.552: INFO: Trying to get logs from node jerma-worker pod projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79 container projected-all-volume-test: STEP: delete the pod May 25 21:55:08.592: INFO: Waiting for pod projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79 to disappear May 25 21:55:08.615: INFO: Pod projected-volume-cd257c61-1537-470b-9fcc-e17dc3acef79 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:55:08.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6933" for this suite. 
• [SLOW TEST:6.203 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2049,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:55:08.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-fa1fc9cd-c42b-44d2-be50-d693f6a2ce4b STEP: Creating a pod to test consume configMaps May 25 21:55:08.748: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2370eb08-0e83-45cc-b4b9-8230026b13e0" in namespace "projected-4107" to be "success or failure" May 25 21:55:08.753: INFO: Pod "pod-projected-configmaps-2370eb08-0e83-45cc-b4b9-8230026b13e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.667488ms May 25 21:55:10.830: INFO: Pod "pod-projected-configmaps-2370eb08-0e83-45cc-b4b9-8230026b13e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081432194s May 25 21:55:12.833: INFO: Pod "pod-projected-configmaps-2370eb08-0e83-45cc-b4b9-8230026b13e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085042232s STEP: Saw pod success May 25 21:55:12.833: INFO: Pod "pod-projected-configmaps-2370eb08-0e83-45cc-b4b9-8230026b13e0" satisfied condition "success or failure" May 25 21:55:12.836: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-2370eb08-0e83-45cc-b4b9-8230026b13e0 container projected-configmap-volume-test: STEP: delete the pod May 25 21:55:12.925: INFO: Waiting for pod pod-projected-configmaps-2370eb08-0e83-45cc-b4b9-8230026b13e0 to disappear May 25 21:55:12.927: INFO: Pod pod-projected-configmaps-2370eb08-0e83-45cc-b4b9-8230026b13e0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:55:12.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4107" for this suite. 
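Consuming one ConfigMap through two volumes in the same pod looks roughly like the following; a sketch with illustrative names, using projected sources to match the plugin the test exercises:

    kubectl create configmap multi-volume-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "cat /etc/cfg-a/data-1 /etc/cfg-b/data-1"]
        volumeMounts:
        - name: cfg-a
          mountPath: /etc/cfg-a
        - name: cfg-b
          mountPath: /etc/cfg-b
      volumes:
      - name: cfg-a
        projected:
          sources:
          - configMap:
              name: multi-volume-cm
      - name: cfg-b
        projected:
          sources:
          - configMap:
              name: multi-volume-cm
    EOF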
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2074,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:55:12.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-457833b0-41fd-49ab-b1e0-471d1119d25b in namespace container-probe-2920 May 25 21:55:17.280: INFO: Started pod liveness-457833b0-41fd-49ab-b1e0-471d1119d25b in namespace container-probe-2920 STEP: checking the pod's current state and verifying that restartCount is present May 25 21:55:17.283: INFO: Initial restart count of pod liveness-457833b0-41fd-49ab-b1e0-471d1119d25b is 0 May 25 21:55:33.344: INFO: Restart count of pod container-probe-2920/liveness-457833b0-41fd-49ab-b1e0-471d1119d25b is now 1 (16.060696801s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:55:33.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2920" for this suite. • [SLOW TEST:20.438 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2106,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:55:33.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 25 21:55:33.484: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 21:55:33.493: INFO: Waiting for terminating namespaces to be deleted... 
May 25 21:55:33.494: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 25 21:55:33.498: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 25 21:55:33.498: INFO: Container kindnet-cni ready: true, restart count 0 May 25 21:55:33.498: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 25 21:55:33.498: INFO: Container kube-proxy ready: true, restart count 0 May 25 21:55:33.498: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 25 21:55:33.503: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 25 21:55:33.503: INFO: Container kindnet-cni ready: true, restart count 0 May 25 21:55:33.503: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 25 21:55:33.503: INFO: Container kube-bench ready: false, restart count 0 May 25 21:55:33.503: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 25 21:55:33.503: INFO: Container kube-proxy ready: true, restart count 0 May 25 21:55:33.503: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 25 21:55:33.503: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 25 21:55:33.986: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 25 21:55:33.986: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 25 21:55:33.986: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 25 21:55:33.986: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 25 21:55:33.986: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 25 21:55:33.994: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-14e0af23-14a0-414e-a36c-34227aa3f4a8.161264264b918011], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5900/filler-pod-14e0af23-14a0-414e-a36c-34227aa3f4a8 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-14e0af23-14a0-414e-a36c-34227aa3f4a8.161264269c1c9956], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-14e0af23-14a0-414e-a36c-34227aa3f4a8.16126426e234153a], Reason = [Created], Message = [Created container filler-pod-14e0af23-14a0-414e-a36c-34227aa3f4a8] STEP: Considering event: Type = [Normal], Name = [filler-pod-14e0af23-14a0-414e-a36c-34227aa3f4a8.16126426f84b7e51], Reason = [Started], Message = [Started container filler-pod-14e0af23-14a0-414e-a36c-34227aa3f4a8] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed49c5d9-2b34-4e00-8f06-a6b1718f0c2d.161264264c986058], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5900/filler-pod-ed49c5d9-2b34-4e00-8f06-a6b1718f0c2d to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed49c5d9-2b34-4e00-8f06-a6b1718f0c2d.16126426d7a0bfc4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed49c5d9-2b34-4e00-8f06-a6b1718f0c2d.16126427120f9e38], Reason = [Created], Message = [Created container filler-pod-ed49c5d9-2b34-4e00-8f06-a6b1718f0c2d] STEP: Considering event: Type = [Normal], Name = [filler-pod-ed49c5d9-2b34-4e00-8f06-a6b1718f0c2d.1612642726937911], Reason = [Started], Message = [Started container filler-pod-ed49c5d9-2b34-4e00-8f06-a6b1718f0c2d] STEP: Considering event: Type = [Warning], Name = [additional-pod.161264273bfba945], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:55:39.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5900" for this suite. 
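The arithmetic behind the test just finished: each node advertises allocatable CPU, pods already running request part of it (100m for each kindnet pod, 0 for kube-proxy), the filler pods absorb the remainder (11130m per node), so one more pod with any non-trivial CPU request has nowhere to fit. The final FailedScheduling event can be reproduced directly with an oversized request; pod name and request size here are illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: additional-pod-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "64"    # deliberately larger than any node's allocatable CPU
    EOF
    # The pod stays Pending with an event like:
    #   0/3 nodes are available: ... Insufficient cpu.
    kubectl describe pod additional-pod-demo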
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.849 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":133,"skipped":2111,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:55:39.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:55:39.362: INFO: Create a RollingUpdate DaemonSet May 25 21:55:39.366: INFO: Check that daemon pods launch on every node of the cluster May 25 21:55:39.374: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:55:39.378: INFO: Number of nodes with available pods: 0 May 25 21:55:39.378: INFO: Node jerma-worker is running more than one daemon pod May 25 21:55:40.383: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:55:40.387: INFO: Number of nodes with available pods: 0 May 25 21:55:40.387: INFO: Node jerma-worker is running more than one daemon pod May 25 21:55:41.383: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:55:41.386: INFO: Number of nodes with available pods: 0 May 25 21:55:41.386: INFO: Node jerma-worker is running more than one daemon pod May 25 21:55:42.382: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:55:42.385: INFO: Number of nodes with available pods: 0 May 25 21:55:42.385: INFO: Node jerma-worker is running more than one daemon pod May 25 21:55:43.383: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:55:43.386: INFO: Number of nodes with available pods: 0 May 25 21:55:43.386: INFO: Node jerma-worker is running more than one daemon pod May 25 21:55:44.392: INFO: DaemonSet pods can't tolerate node jerma-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:55:44.396: INFO: Number of nodes with available pods: 2 May 25 21:55:44.396: INFO: Number of running nodes: 2, number of available pods: 2 May 25 21:55:44.396: INFO: Update the DaemonSet to trigger a rollout May 25 21:55:44.404: INFO: Updating DaemonSet daemon-set May 25 21:55:59.448: INFO: Roll back the DaemonSet before rollout is complete May 25 21:55:59.455: INFO: Updating DaemonSet daemon-set May 25 21:55:59.455: INFO: Make sure DaemonSet rollback is complete May 25 21:55:59.460: INFO: Wrong image for pod: daemon-set-xplkh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 25 21:55:59.460: INFO: Pod daemon-set-xplkh is not available May 25 21:55:59.519: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:56:00.609: INFO: Wrong image for pod: daemon-set-xplkh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 25 21:56:00.609: INFO: Pod daemon-set-xplkh is not available May 25 21:56:00.613: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:56:01.524: INFO: Wrong image for pod: daemon-set-xplkh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 25 21:56:01.524: INFO: Pod daemon-set-xplkh is not available May 25 21:56:01.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:56:02.524: INFO: Wrong image for pod: daemon-set-xplkh. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 25 21:56:02.524: INFO: Pod daemon-set-xplkh is not available May 25 21:56:02.527: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:56:03.524: INFO: Pod daemon-set-9c75z is not available May 25 21:56:03.530: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5815, will wait for the garbage collector to delete the pods May 25 21:56:03.596: INFO: Deleting DaemonSet.extensions daemon-set took: 6.925783ms May 25 21:56:03.896: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.220456ms May 25 21:56:09.599: INFO: Number of nodes with available pods: 0 May 25 21:56:09.599: INFO: Number of running nodes: 0, number of available pods: 0 May 25 21:56:09.602: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5815/daemonsets","resourceVersion":"19126314"},"items":null} May 25 21:56:09.605: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5815/pods","resourceVersion":"19126314"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:09.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5815" for this suite. 
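------------------------------
For orientation, a minimal RollingUpdate DaemonSet of the shape this test exercises; the name and the known-good image mirror the log, the rest is an illustrative sketch:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate          # a rollback replays an earlier template through this strategy
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # the known-good image from the log

The test updates the template to the unresolvable image foo:non-existent and rolls back before the rollout finishes; the "Wrong image for pod" lines are the wait for every pod to return to the httpd image, and "without unnecessary restarts" means pods that never left the good image are not restarted.
------------------------------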
• [SLOW TEST:30.379 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":134,"skipped":2113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:09.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:09.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5369" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2146,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:09.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-321.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-321.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 21:56:15.929: INFO: DNS probes using dns-321/dns-test-589b0856-a92e-43cf-ba45-af434e064a8e succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:15.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-321" for this suite. • [SLOW TEST:6.209 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":136,"skipped":2157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:16.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:56:16.509: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3b38d5c3-b8b0-4960-8dcd-d5d8b0bfa147" in namespace "security-context-test-2085" to be "success or failure" May 25 21:56:16.545: INFO: Pod "busybox-user-65534-3b38d5c3-b8b0-4960-8dcd-d5d8b0bfa147": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.655861ms May 25 21:56:18.657: INFO: Pod "busybox-user-65534-3b38d5c3-b8b0-4960-8dcd-d5d8b0bfa147": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147821803s May 25 21:56:20.661: INFO: Pod "busybox-user-65534-3b38d5c3-b8b0-4960-8dcd-d5d8b0bfa147": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152021197s May 25 21:56:20.661: INFO: Pod "busybox-user-65534-3b38d5c3-b8b0-4960-8dcd-d5d8b0bfa147" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:20.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2085" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2212,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:20.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 25 21:56:20.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 25 21:56:20.968: INFO: stderr: "" May 25 21:56:20.968: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:20.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5553" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":138,"skipped":2216,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:20.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-e7120e9e-9058-4b31-9a75-ea8d9bfead11 STEP: Creating a pod to test consume secrets May 25 21:56:21.108: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-872f260a-58fd-4dba-b46b-908c678d26ec" in namespace "projected-733" to be "success or failure" May 25 21:56:21.172: INFO: Pod "pod-projected-secrets-872f260a-58fd-4dba-b46b-908c678d26ec": Phase="Pending", Reason="", readiness=false. Elapsed: 63.480674ms May 25 21:56:23.176: INFO: Pod "pod-projected-secrets-872f260a-58fd-4dba-b46b-908c678d26ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067889309s May 25 21:56:25.180: INFO: Pod "pod-projected-secrets-872f260a-58fd-4dba-b46b-908c678d26ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071605972s STEP: Saw pod success May 25 21:56:25.180: INFO: Pod "pod-projected-secrets-872f260a-58fd-4dba-b46b-908c678d26ec" satisfied condition "success or failure" May 25 21:56:25.182: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-872f260a-58fd-4dba-b46b-908c678d26ec container projected-secret-volume-test: STEP: delete the pod May 25 21:56:25.439: INFO: Waiting for pod pod-projected-secrets-872f260a-58fd-4dba-b46b-908c678d26ec to disappear May 25 21:56:25.451: INFO: Pod pod-projected-secrets-872f260a-58fd-4dba-b46b-908c678d26ec no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:25.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-733" for this suite. 
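------------------------------
A sketch of the "volume with mappings" shape just verified: a projected volume whose items remap a Secret key to a different file path. Names, keys, and paths are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map   # the suite appends a UID to names like this
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1             # Secret key ...
            path: new-path-data-1   # ... surfaced under a remapped file name

The "success or failure" condition seen throughout the log is simply the pod reaching phase Succeeded after the container prints the mapped value and exits.
------------------------------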
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2217,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:25.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-de977c11-b937-4d5c-80a8-e28381f4b276 May 25 21:56:25.588: INFO: Pod name my-hostname-basic-de977c11-b937-4d5c-80a8-e28381f4b276: Found 0 pods out of 1 May 25 21:56:30.591: INFO: Pod name my-hostname-basic-de977c11-b937-4d5c-80a8-e28381f4b276: Found 1 pods out of 1 May 25 21:56:30.591: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-de977c11-b937-4d5c-80a8-e28381f4b276" are running May 25 21:56:30.594: INFO: Pod "my-hostname-basic-de977c11-b937-4d5c-80a8-e28381f4b276-hn4r5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 21:56:25 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 21:56:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 21:56:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-25 21:56:25 +0000 UTC Reason: Message:}]) May 25 21:56:30.594: INFO: Trying to dial the pod May 25 21:56:35.608: INFO: Controller my-hostname-basic-de977c11-b937-4d5c-80a8-e28381f4b276: Got expected result from replica 1 [my-hostname-basic-de977c11-b937-4d5c-80a8-e28381f4b276-hn4r5]: "my-hostname-basic-de977c11-b937-4d5c-80a8-e28381f4b276-hn4r5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:35.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5530" for this suite. 
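------------------------------
The replication controller under test serves each replica's hostname over HTTP so the prober can dial every pod and match the response to the pod name, as seen above. A minimal sketch; the image, args, and port are assumptions about the conformance serve-hostname image, not values taken from this log:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic            # the suite appends a UID
spec:
  replicas: 1
  selector:
    name: my-hostname-basic          # RC selectors are plain key/value maps
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed public image
        args: ["serve-hostname"]     # replies with the pod's own hostname
        ports:
        - containerPort: 9376
------------------------------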
• [SLOW TEST:10.156 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":140,"skipped":2239,"failed":0} SSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:35.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 25 21:56:42.205: INFO: Successfully updated pod "adopt-release-ft7r8" STEP: Checking that the Job readopts the Pod May 25 21:56:42.205: INFO: Waiting up to 15m0s for pod "adopt-release-ft7r8" in namespace "job-7467" to be "adopted" May 25 21:56:42.228: INFO: Pod "adopt-release-ft7r8": Phase="Running", Reason="", readiness=true. Elapsed: 23.003926ms May 25 21:56:44.233: INFO: Pod "adopt-release-ft7r8": Phase="Running", Reason="", readiness=true. Elapsed: 2.027743372s May 25 21:56:44.233: INFO: Pod "adopt-release-ft7r8" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 25 21:56:44.742: INFO: Successfully updated pod "adopt-release-ft7r8" STEP: Checking that the Job releases the Pod May 25 21:56:44.742: INFO: Waiting up to 15m0s for pod "adopt-release-ft7r8" in namespace "job-7467" to be "released" May 25 21:56:44.749: INFO: Pod "adopt-release-ft7r8": Phase="Running", Reason="", readiness=true. Elapsed: 7.4889ms May 25 21:56:46.753: INFO: Pod "adopt-release-ft7r8": Phase="Running", Reason="", readiness=true. Elapsed: 2.010962917s May 25 21:56:46.753: INFO: Pod "adopt-release-ft7r8" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:46.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7467" for this suite. 
• [SLOW TEST:11.146 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":141,"skipped":2244,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:46.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-8359 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-8359 I0525 21:56:47.163855 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-8359, replica count: 2 I0525 21:56:50.214326 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 21:56:53.214580 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 21:56:53.214: INFO: Creating new exec pod May 25 21:56:58.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8359 execpodgfbkk -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 25 21:56:58.469: INFO: stderr: "I0525 21:56:58.375232 2094 log.go:172] (0xc0009829a0) (0xc0005b6000) Create stream\nI0525 21:56:58.375298 2094 log.go:172] (0xc0009829a0) (0xc0005b6000) Stream added, broadcasting: 1\nI0525 21:56:58.377569 2094 log.go:172] (0xc0009829a0) Reply frame received for 1\nI0525 21:56:58.377611 2094 log.go:172] (0xc0009829a0) (0xc0007e80a0) Create stream\nI0525 21:56:58.377640 2094 log.go:172] (0xc0009829a0) (0xc0007e80a0) Stream added, broadcasting: 3\nI0525 21:56:58.378552 2094 log.go:172] (0xc0009829a0) Reply frame received for 3\nI0525 21:56:58.378580 2094 log.go:172] (0xc0009829a0) (0xc0007e8140) Create stream\nI0525 21:56:58.378587 2094 log.go:172] (0xc0009829a0) (0xc0007e8140) Stream added, broadcasting: 5\nI0525 21:56:58.379272 2094 log.go:172] (0xc0009829a0) Reply frame received for 5\nI0525 21:56:58.442767 2094 log.go:172] (0xc0009829a0) Data frame received for 5\nI0525 21:56:58.442789 2094 log.go:172] (0xc0007e8140) (5) Data frame handling\nI0525 21:56:58.442801 2094 log.go:172] (0xc0007e8140) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0525 21:56:58.462674 2094 log.go:172] 
(0xc0009829a0) Data frame received for 3\nI0525 21:56:58.462695 2094 log.go:172] (0xc0007e80a0) (3) Data frame handling\nI0525 21:56:58.462724 2094 log.go:172] (0xc0009829a0) Data frame received for 5\nI0525 21:56:58.462745 2094 log.go:172] (0xc0007e8140) (5) Data frame handling\nI0525 21:56:58.462767 2094 log.go:172] (0xc0007e8140) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0525 21:56:58.462926 2094 log.go:172] (0xc0009829a0) Data frame received for 5\nI0525 21:56:58.462952 2094 log.go:172] (0xc0007e8140) (5) Data frame handling\nI0525 21:56:58.464143 2094 log.go:172] (0xc0009829a0) Data frame received for 1\nI0525 21:56:58.464159 2094 log.go:172] (0xc0005b6000) (1) Data frame handling\nI0525 21:56:58.464171 2094 log.go:172] (0xc0005b6000) (1) Data frame sent\nI0525 21:56:58.464188 2094 log.go:172] (0xc0009829a0) (0xc0005b6000) Stream removed, broadcasting: 1\nI0525 21:56:58.464263 2094 log.go:172] (0xc0009829a0) Go away received\nI0525 21:56:58.464422 2094 log.go:172] (0xc0009829a0) (0xc0005b6000) Stream removed, broadcasting: 1\nI0525 21:56:58.464434 2094 log.go:172] (0xc0009829a0) (0xc0007e80a0) Stream removed, broadcasting: 3\nI0525 21:56:58.464440 2094 log.go:172] (0xc0009829a0) (0xc0007e8140) Stream removed, broadcasting: 5\n" May 25 21:56:58.469: INFO: stdout: "" May 25 21:56:58.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8359 execpodgfbkk -- /bin/sh -x -c nc -zv -t -w 2 10.109.59.217 80' May 25 21:56:58.703: INFO: stderr: "I0525 21:56:58.614688 2116 log.go:172] (0xc0000f5290) (0xc00089e000) Create stream\nI0525 21:56:58.614756 2116 log.go:172] (0xc0000f5290) (0xc00089e000) Stream added, broadcasting: 1\nI0525 21:56:58.618005 2116 log.go:172] (0xc0000f5290) Reply frame received for 1\nI0525 21:56:58.618048 2116 log.go:172] (0xc0000f5290) (0xc000a48000) Create stream\nI0525 21:56:58.618059 2116 log.go:172] (0xc0000f5290) (0xc000a48000) Stream added, broadcasting: 3\nI0525 21:56:58.619234 2116 log.go:172] (0xc0000f5290) Reply frame received for 3\nI0525 21:56:58.619261 2116 log.go:172] (0xc0000f5290) (0xc00089e0a0) Create stream\nI0525 21:56:58.619282 2116 log.go:172] (0xc0000f5290) (0xc00089e0a0) Stream added, broadcasting: 5\nI0525 21:56:58.620375 2116 log.go:172] (0xc0000f5290) Reply frame received for 5\nI0525 21:56:58.694637 2116 log.go:172] (0xc0000f5290) Data frame received for 3\nI0525 21:56:58.694668 2116 log.go:172] (0xc000a48000) (3) Data frame handling\nI0525 21:56:58.694704 2116 log.go:172] (0xc0000f5290) Data frame received for 5\nI0525 21:56:58.694717 2116 log.go:172] (0xc00089e0a0) (5) Data frame handling\nI0525 21:56:58.694730 2116 log.go:172] (0xc00089e0a0) (5) Data frame sent\nI0525 21:56:58.694741 2116 log.go:172] (0xc0000f5290) Data frame received for 5\nI0525 21:56:58.694752 2116 log.go:172] (0xc00089e0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.59.217 80\nConnection to 10.109.59.217 80 port [tcp/http] succeeded!\nI0525 21:56:58.696280 2116 log.go:172] (0xc0000f5290) Data frame received for 1\nI0525 21:56:58.696306 2116 log.go:172] (0xc00089e000) (1) Data frame handling\nI0525 21:56:58.696324 2116 log.go:172] (0xc00089e000) (1) Data frame sent\nI0525 21:56:58.696349 2116 log.go:172] (0xc0000f5290) (0xc00089e000) Stream removed, broadcasting: 1\nI0525 21:56:58.696374 2116 log.go:172] (0xc0000f5290) Go away received\nI0525 21:56:58.696785 2116 log.go:172] (0xc0000f5290) (0xc00089e000) Stream removed, broadcasting: 1\nI0525 21:56:58.696811 2116 log.go:172] 
(0xc0000f5290) (0xc000a48000) Stream removed, broadcasting: 3\nI0525 21:56:58.696824 2116 log.go:172] (0xc0000f5290) (0xc00089e0a0) Stream removed, broadcasting: 5\n" May 25 21:56:58.703: INFO: stdout: "" May 25 21:56:58.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8359 execpodgfbkk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30075' May 25 21:56:58.896: INFO: stderr: "I0525 21:56:58.830062 2138 log.go:172] (0xc000b9e580) (0xc000942280) Create stream\nI0525 21:56:58.830124 2138 log.go:172] (0xc000b9e580) (0xc000942280) Stream added, broadcasting: 1\nI0525 21:56:58.832975 2138 log.go:172] (0xc000b9e580) Reply frame received for 1\nI0525 21:56:58.833051 2138 log.go:172] (0xc000b9e580) (0xc0006fa6e0) Create stream\nI0525 21:56:58.833077 2138 log.go:172] (0xc000b9e580) (0xc0006fa6e0) Stream added, broadcasting: 3\nI0525 21:56:58.834294 2138 log.go:172] (0xc000b9e580) Reply frame received for 3\nI0525 21:56:58.834333 2138 log.go:172] (0xc000b9e580) (0xc0006294a0) Create stream\nI0525 21:56:58.834345 2138 log.go:172] (0xc000b9e580) (0xc0006294a0) Stream added, broadcasting: 5\nI0525 21:56:58.835279 2138 log.go:172] (0xc000b9e580) Reply frame received for 5\nI0525 21:56:58.889420 2138 log.go:172] (0xc000b9e580) Data frame received for 3\nI0525 21:56:58.889451 2138 log.go:172] (0xc0006fa6e0) (3) Data frame handling\nI0525 21:56:58.889484 2138 log.go:172] (0xc000b9e580) Data frame received for 5\nI0525 21:56:58.889496 2138 log.go:172] (0xc0006294a0) (5) Data frame handling\nI0525 21:56:58.889541 2138 log.go:172] (0xc0006294a0) (5) Data frame sent\nI0525 21:56:58.889572 2138 log.go:172] (0xc000b9e580) Data frame received for 5\n+ nc -zv -t -w 2 172.17.0.10 30075\nConnection to 172.17.0.10 30075 port [tcp/30075] succeeded!\nI0525 21:56:58.889598 2138 log.go:172] (0xc0006294a0) (5) Data frame handling\nI0525 21:56:58.890997 2138 log.go:172] (0xc000b9e580) Data frame received for 1\nI0525 21:56:58.891029 2138 log.go:172] (0xc000942280) (1) Data frame handling\nI0525 21:56:58.891057 2138 log.go:172] (0xc000942280) (1) Data frame sent\nI0525 21:56:58.891083 2138 log.go:172] (0xc000b9e580) (0xc000942280) Stream removed, broadcasting: 1\nI0525 21:56:58.891100 2138 log.go:172] (0xc000b9e580) Go away received\nI0525 21:56:58.891590 2138 log.go:172] (0xc000b9e580) (0xc000942280) Stream removed, broadcasting: 1\nI0525 21:56:58.891627 2138 log.go:172] (0xc000b9e580) (0xc0006fa6e0) Stream removed, broadcasting: 3\nI0525 21:56:58.891647 2138 log.go:172] (0xc000b9e580) (0xc0006294a0) Stream removed, broadcasting: 5\n" May 25 21:56:58.897: INFO: stdout: "" May 25 21:56:58.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8359 execpodgfbkk -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30075' May 25 21:56:59.108: INFO: stderr: "I0525 21:56:59.028261 2160 log.go:172] (0xc00010b130) (0xc000687d60) Create stream\nI0525 21:56:59.028314 2160 log.go:172] (0xc00010b130) (0xc000687d60) Stream added, broadcasting: 1\nI0525 21:56:59.032329 2160 log.go:172] (0xc00010b130) Reply frame received for 1\nI0525 21:56:59.032376 2160 log.go:172] (0xc00010b130) (0xc00065e6e0) Create stream\nI0525 21:56:59.032390 2160 log.go:172] (0xc00010b130) (0xc00065e6e0) Stream added, broadcasting: 3\nI0525 21:56:59.033580 2160 log.go:172] (0xc00010b130) Reply frame received for 3\nI0525 21:56:59.033606 2160 log.go:172] (0xc00010b130) (0xc000028000) Create stream\nI0525 21:56:59.033615 2160 log.go:172] (0xc00010b130) (0xc000028000) Stream added, 
broadcasting: 5\nI0525 21:56:59.034417 2160 log.go:172] (0xc00010b130) Reply frame received for 5\nI0525 21:56:59.101750 2160 log.go:172] (0xc00010b130) Data frame received for 5\nI0525 21:56:59.101775 2160 log.go:172] (0xc000028000) (5) Data frame handling\nI0525 21:56:59.101789 2160 log.go:172] (0xc000028000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30075\nI0525 21:56:59.102043 2160 log.go:172] (0xc00010b130) Data frame received for 5\nI0525 21:56:59.102064 2160 log.go:172] (0xc000028000) (5) Data frame handling\nI0525 21:56:59.102080 2160 log.go:172] (0xc000028000) (5) Data frame sent\nConnection to 172.17.0.8 30075 port [tcp/30075] succeeded!\nI0525 21:56:59.102514 2160 log.go:172] (0xc00010b130) Data frame received for 3\nI0525 21:56:59.102539 2160 log.go:172] (0xc00065e6e0) (3) Data frame handling\nI0525 21:56:59.102578 2160 log.go:172] (0xc00010b130) Data frame received for 5\nI0525 21:56:59.102606 2160 log.go:172] (0xc000028000) (5) Data frame handling\nI0525 21:56:59.104169 2160 log.go:172] (0xc00010b130) Data frame received for 1\nI0525 21:56:59.104200 2160 log.go:172] (0xc000687d60) (1) Data frame handling\nI0525 21:56:59.104239 2160 log.go:172] (0xc000687d60) (1) Data frame sent\nI0525 21:56:59.104270 2160 log.go:172] (0xc00010b130) (0xc000687d60) Stream removed, broadcasting: 1\nI0525 21:56:59.104316 2160 log.go:172] (0xc00010b130) Go away received\nI0525 21:56:59.104648 2160 log.go:172] (0xc00010b130) (0xc000687d60) Stream removed, broadcasting: 1\nI0525 21:56:59.104668 2160 log.go:172] (0xc00010b130) (0xc00065e6e0) Stream removed, broadcasting: 3\nI0525 21:56:59.104679 2160 log.go:172] (0xc00010b130) (0xc000028000) Stream removed, broadcasting: 5\n" May 25 21:56:59.108: INFO: stdout: "" May 25 21:56:59.109: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:56:59.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8359" for this suite. 
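------------------------------
The service starts as a pure DNS alias and is then retyped so kube-proxy programs a cluster IP and a node port for it; the nc probes above hit the service name, the cluster IP (10.109.59.217:80), and both nodes on the allocated node port (30075). A before/after sketch with illustrative values:

apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com        # illustrative target; ExternalName is just a CNAME
---
# After the test retypes it (externalName is dropped, a selector and port are set):
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: NodePort
  selector:
    name: externalname-service     # assumed label on the replication controller's pods
  ports:
  - port: 80
    protocol: TCP                  # the nodePort (30075 above) is allocated automatically
------------------------------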
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.426 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":142,"skipped":2247,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:56:59.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:57:03.340: INFO: Waiting up to 5m0s for pod "client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598" in namespace "pods-2986" to be "success or failure" May 25 21:57:03.356: INFO: Pod "client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598": Phase="Pending", Reason="", readiness=false. Elapsed: 16.329729ms May 25 21:57:05.380: INFO: Pod "client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04013173s May 25 21:57:07.384: INFO: Pod "client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598": Phase="Running", Reason="", readiness=true. Elapsed: 4.044041961s May 25 21:57:09.388: INFO: Pod "client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047870645s STEP: Saw pod success May 25 21:57:09.388: INFO: Pod "client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598" satisfied condition "success or failure" May 25 21:57:09.390: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598 container env3cont: STEP: delete the pod May 25 21:57:09.442: INFO: Waiting for pod client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598 to disappear May 25 21:57:09.567: INFO: Pod client-envvars-eff75ba1-eedd-42a4-9729-e78e3171b598 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:57:09.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2986" for this suite. 
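------------------------------
What this test asserts is the kubelet's automatic service environment injection: every service that exists in the namespace when a pod starts is exposed to that pod as {SVC}_SERVICE_HOST / {SVC}_SERVICE_PORT variables, with the service name upper-cased and dashes turned into underscores. A sketch with an assumed service name and ports:

apiVersion: v1
kind: Service
metadata:
  name: fooservice                 # assumed name, for illustration only
spec:
  selector:
    name: server-envvars           # illustrative label on the backing pod
  ports:
  - port: 8765
    targetPort: 8080
---
# A pod created in the same namespace after this service exists will see,
# among others:
#   FOOSERVICE_SERVICE_HOST=<cluster IP>
#   FOOSERVICE_SERVICE_PORT=8765
------------------------------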
• [SLOW TEST:10.389 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2262,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:57:09.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 25 21:57:09.623: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 21:57:09.646: INFO: Waiting for terminating namespaces to be deleted... May 25 21:57:09.648: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 25 21:57:09.652: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 21:57:09.652: INFO: Container kindnet-cni ready: true, restart count 0 May 25 21:57:09.652: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 21:57:09.652: INFO: Container kube-proxy ready: true, restart count 0 May 25 21:57:09.652: INFO: adopt-release-rtrhs from job-7467 started at 2020-05-25 21:56:35 +0000 UTC (1 container statuses recorded) May 25 21:57:09.652: INFO: Container c ready: true, restart count 0 May 25 21:57:09.652: INFO: adopt-release-2qphd from job-7467 started at 2020-05-25 21:56:44 +0000 UTC (1 container statuses recorded) May 25 21:57:09.652: INFO: Container c ready: true, restart count 0 May 25 21:57:09.652: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 25 21:57:09.657: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 21:57:09.657: INFO: Container kube-proxy ready: true, restart count 0 May 25 21:57:09.657: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 25 21:57:09.657: INFO: Container kube-hunter ready: false, restart count 0 May 25 21:57:09.657: INFO: adopt-release-ft7r8 from job-7467 started at 2020-05-25 21:56:35 +0000 UTC (1 container statuses recorded) May 25 21:57:09.657: INFO: Container c ready: true, restart count 0 May 25 21:57:09.657: INFO: server-envvars-79069e4c-dc10-428b-b156-f199bc60bbf4 from pods-2986 started at 2020-05-25 21:56:59 +0000 UTC (1 container statuses recorded) May 25 21:57:09.657: INFO: Container srv ready: true, restart count 0 May 25 21:57:09.657: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 21:57:09.657:

INFO: Container kindnet-cni ready: true, restart count 0 May 25 21:57:09.657: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 25 21:57:09.657: INFO: Container kube-bench ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-880dd9ca-9bc3-4be9-a9b8-4d08e7570f8c 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-880dd9ca-9bc3-4be9-a9b8-4d08e7570f8c off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-880dd9ca-9bc3-4be9-a9b8-4d08e7570f8c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:57:25.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8688" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.321 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":144,"skipped":2270,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:57:25.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:57:26.532: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 
21:57:28.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040646, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040646, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040646, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040646, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:57:31.572: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:57:33.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3373" for this suite. STEP: Destroying namespace "webhook-3373-markers" for this suite. 
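------------------------------
For reference, the shape of a mutating webhook registration like the ones listed and collection-deleted above. The service name, path, and rule are illustrative, and the caBundle placeholder must be replaced with the base64 CA that signed the webhook's serving certificate:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook      # illustrative name
webhooks:
- name: add-configmap-data.example.com # webhook names must be qualified
  clientConfig:
    service:
      namespace: webhook-3373          # namespace from this test
      name: e2e-test-webhook
      path: /mutating-configmaps       # assumed handler path
    caBundle: "<base64-encoded CA bundle>"  # placeholder, fill in before applying
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail

Listing is then just `kubectl get mutatingwebhookconfigurations`; deleting the collection removes every matching configuration, after which the second configMap is created unmutated, exactly as the steps above show.
------------------------------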
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.864 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":145,"skipped":2270,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:57:33.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-d3141ce3-7af4-4141-83f3-5b70553fe071 STEP: Creating a pod to test consume configMaps May 25 21:57:33.807: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e086cad7-4cb3-428d-9718-35ec93f970e3" in namespace "projected-3267" to be "success or failure" May 25 21:57:33.845: INFO: Pod "pod-projected-configmaps-e086cad7-4cb3-428d-9718-35ec93f970e3": Phase="Pending", Reason="", readiness=false. Elapsed: 37.781612ms May 25 21:57:35.847: INFO: Pod "pod-projected-configmaps-e086cad7-4cb3-428d-9718-35ec93f970e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040416652s May 25 21:57:37.852: INFO: Pod "pod-projected-configmaps-e086cad7-4cb3-428d-9718-35ec93f970e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044659953s STEP: Saw pod success May 25 21:57:37.852: INFO: Pod "pod-projected-configmaps-e086cad7-4cb3-428d-9718-35ec93f970e3" satisfied condition "success or failure" May 25 21:57:37.854: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e086cad7-4cb3-428d-9718-35ec93f970e3 container projected-configmap-volume-test: STEP: delete the pod May 25 21:57:37.886: INFO: Waiting for pod pod-projected-configmaps-e086cad7-4cb3-428d-9718-35ec93f970e3 to disappear May 25 21:57:37.902: INFO: Pod pod-projected-configmaps-e086cad7-4cb3-428d-9718-35ec93f970e3 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:57:37.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3267" for this suite. 
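------------------------------
This is the ConfigMap twin of the projected-secret test earlier: the same projected volume type, sourced from a ConfigMap instead. Names and keys are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-test-volume   # the suite appends a UID
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume
------------------------------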
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2285,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:57:37.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:57:37.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7ce85074-5d38-408a-a877-20e798b365ba" in namespace "downward-api-3422" to be "success or failure" May 25 21:57:38.017: INFO: Pod "downwardapi-volume-7ce85074-5d38-408a-a877-20e798b365ba": Phase="Pending", Reason="", readiness=false. Elapsed: 41.74556ms May 25 21:57:40.021: INFO: Pod "downwardapi-volume-7ce85074-5d38-408a-a877-20e798b365ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045746363s May 25 21:57:42.026: INFO: Pod "downwardapi-volume-7ce85074-5d38-408a-a877-20e798b365ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050501286s STEP: Saw pod success May 25 21:57:42.026: INFO: Pod "downwardapi-volume-7ce85074-5d38-408a-a877-20e798b365ba" satisfied condition "success or failure" May 25 21:57:42.029: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7ce85074-5d38-408a-a877-20e798b365ba container client-container: STEP: delete the pod May 25 21:57:42.047: INFO: Waiting for pod downwardapi-volume-7ce85074-5d38-408a-a877-20e798b365ba to disappear May 25 21:57:42.051: INFO: Pod downwardapi-volume-7ce85074-5d38-408a-a877-20e798b365ba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:57:42.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3422" for this suite. 
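------------------------------
"Mode on item file" refers to the per-item mode field of a downwardAPI volume item, which sets the permission bits on the single projected file. A sketch with illustrative names and paths:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-modes
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]   # should show -r--------
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                 # the per-item file mode under test
------------------------------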
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2288,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:57:42.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 21:57:42.878: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 21:57:44.962: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040663, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040663, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040663, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040662, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 21:57:48.023: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:58:00.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8455" for this suite. 
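------------------------------
The timeout matrix above comes down to two per-webhook fields, timeoutSeconds and failurePolicy. A sketch of the "shorter than latency" registration; the slow-by-5s handler path is an assumption about the test server, and the caBundle placeholder must be filled in:

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: e2e-test-slow-webhook        # illustrative name
webhooks:
- name: slow-webhook.example.com
  timeoutSeconds: 1                  # shorter than the webhook's 5s response time
  failurePolicy: Fail                # so the API request fails; Ignore admits it instead
  clientConfig:
    service:
      namespace: webhook-8455        # namespace from this test
      name: e2e-test-webhook
      path: /always-allow-delay-5s   # assumed handler that sleeps 5s, then allows
    caBundle: "<base64-encoded CA bundle>"  # placeholder, fill in before applying
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]

Leaving timeoutSeconds unset defaults to 10s in v1, which is the final "timeout is empty" case in the log.
------------------------------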
STEP: Destroying namespace "webhook-8455-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.306 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":148,"skipped":2290,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:58:00.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:58:00.490: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-0f689336-c936-47e2-a557-0cff964f427d" in namespace "security-context-test-6307" to be "success or failure" May 25 21:58:00.494: INFO: Pod "busybox-readonly-false-0f689336-c936-47e2-a557-0cff964f427d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.883677ms May 25 21:58:02.497: INFO: Pod "busybox-readonly-false-0f689336-c936-47e2-a557-0cff964f427d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007415993s May 25 21:58:04.502: INFO: Pod "busybox-readonly-false-0f689336-c936-47e2-a557-0cff964f427d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012640004s May 25 21:58:04.503: INFO: Pod "busybox-readonly-false-0f689336-c936-47e2-a557-0cff964f427d" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:58:04.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6307" for this suite. 
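------------------------------
A minimal sketch of the pod shape under test: readOnlyRootFilesystem set to false on the container securityContext, verified by writing to the root filesystem. The name and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-false
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-false
    image: busybox:1.29
    command: ["sh", "-c", "touch /file && echo writable"]  # succeeds only on a writable rootfs
    securityContext:
      readOnlyRootFilesystem: false
------------------------------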
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2301,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:58:04.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 25 21:58:04.598: INFO: namespace kubectl-8388 May 25 21:58:04.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8388' May 25 21:58:06.000: INFO: stderr: "" May 25 21:58:06.000: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 25 21:58:07.004: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:58:07.004: INFO: Found 0 / 1 May 25 21:58:08.011: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:58:08.011: INFO: Found 0 / 1 May 25 21:58:09.005: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:58:09.005: INFO: Found 0 / 1 May 25 21:58:10.023: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:58:10.023: INFO: Found 1 / 1 May 25 21:58:10.023: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 25 21:58:10.035: INFO: Selector matched 1 pods for map[app:agnhost] May 25 21:58:10.036: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 25 21:58:10.036: INFO: wait on agnhost-master startup in kubectl-8388 May 25 21:58:10.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-ntkdj agnhost-master --namespace=kubectl-8388' May 25 21:58:10.226: INFO: stderr: "" May 25 21:58:10.226: INFO: stdout: "Paused\n" STEP: exposing RC May 25 21:58:10.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8388' May 25 21:58:10.362: INFO: stderr: "" May 25 21:58:10.363: INFO: stdout: "service/rm2 exposed\n" May 25 21:58:10.387: INFO: Service rm2 in namespace kubectl-8388 found. STEP: exposing service May 25 21:58:12.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8388' May 25 21:58:12.543: INFO: stderr: "" May 25 21:58:12.543: INFO: stdout: "service/rm3 exposed\n" May 25 21:58:12.548: INFO: Service rm3 in namespace kubectl-8388 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:58:14.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8388" for this suite. 
• [SLOW TEST:10.053 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":150,"skipped":2302,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:58:14.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 21:58:14.667: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a1a02a0-ecaf-48ef-9591-f0f37aa8d37c" in namespace "downward-api-4235" to be "success or failure" May 25 21:58:14.728: INFO: Pod "downwardapi-volume-8a1a02a0-ecaf-48ef-9591-f0f37aa8d37c": Phase="Pending", Reason="", readiness=false. Elapsed: 61.108693ms May 25 21:58:16.732: INFO: Pod "downwardapi-volume-8a1a02a0-ecaf-48ef-9591-f0f37aa8d37c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065566252s May 25 21:58:18.737: INFO: Pod "downwardapi-volume-8a1a02a0-ecaf-48ef-9591-f0f37aa8d37c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07028035s STEP: Saw pod success May 25 21:58:18.737: INFO: Pod "downwardapi-volume-8a1a02a0-ecaf-48ef-9591-f0f37aa8d37c" satisfied condition "success or failure" May 25 21:58:18.740: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8a1a02a0-ecaf-48ef-9591-f0f37aa8d37c container client-container: STEP: delete the pod May 25 21:58:18.798: INFO: Waiting for pod downwardapi-volume-8a1a02a0-ecaf-48ef-9591-f0f37aa8d37c to disappear May 25 21:58:18.802: INFO: Pod downwardapi-volume-8a1a02a0-ecaf-48ef-9591-f0f37aa8d37c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:58:18.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4235" for this suite. 
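The volume under test projects the container's own CPU request into a file the container can read back. A minimal sketch of such a pod, assuming a busybox image and a 250m request; the container name client-container matches the log, and containerName is required when a downwardAPI volume references resource fields:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example      # illustrative; the test generates a UID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                       # the value the projected file reports
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m                   # report the request in millicores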
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:58:18.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-ee4c1b15-71e8-45be-b620-23d4903868d7 STEP: Creating a pod to test consume configMaps May 25 21:58:18.973: INFO: Waiting up to 5m0s for pod "pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545" in namespace "configmap-4838" to be "success or failure" May 25 21:58:18.976: INFO: Pod "pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.65987ms May 25 21:58:21.059: INFO: Pod "pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085963821s May 25 21:58:23.064: INFO: Pod "pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090266917s May 25 21:58:25.068: INFO: Pod "pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.09451704s STEP: Saw pod success May 25 21:58:25.068: INFO: Pod "pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545" satisfied condition "success or failure" May 25 21:58:25.071: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545 container configmap-volume-test: STEP: delete the pod May 25 21:58:25.096: INFO: Waiting for pod pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545 to disappear May 25 21:58:25.105: INFO: Pod pod-configmaps-54900664-2bd0-4048-a37d-3b20cdd19545 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:58:25.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4838" for this suite. 
• [SLOW TEST:6.297 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2350,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:58:25.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:58:25.232: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 25 21:58:25.279: INFO: Pod name sample-pod: Found 0 pods out of 1 May 25 21:58:30.302: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 21:58:30.303: INFO: Creating deployment "test-rolling-update-deployment" May 25 21:58:30.316: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 25 21:58:30.327: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 25 21:58:32.336: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 25 21:58:32.348: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040710, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040710, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040710, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726040710, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 21:58:34.353: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 25 21:58:34.366: INFO: Deployment 
"test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1972 /apis/apps/v1/namespaces/deployment-1972/deployments/test-rolling-update-deployment 2f2efc7e-c981-4f1d-99fb-0c1e924c9b33 19127540 1 2020-05-25 21:58:30 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0050828e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-25 21:58:30 +0000 UTC,LastTransitionTime:2020-05-25 21:58:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-25 21:58:33 +0000 UTC,LastTransitionTime:2020-05-25 21:58:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 21:58:34.372: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-1972 /apis/apps/v1/namespaces/deployment-1972/replicasets/test-rolling-update-deployment-67cf4f6444 ef029ce3-d8d1-4230-95a7-897fb77c5b23 19127529 1 2020-05-25 21:58:30 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 2f2efc7e-c981-4f1d-99fb-0c1e924c9b33 0xc005082d87 0xc005082d88}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005082df8 
ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 21:58:34.372: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 25 21:58:34.372: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1972 /apis/apps/v1/namespaces/deployment-1972/replicasets/test-rolling-update-controller 87cd614d-81f1-4e0e-a710-34de0ed2aa8f 19127538 2 2020-05-25 21:58:25 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 2f2efc7e-c981-4f1d-99fb-0c1e924c9b33 0xc005082cb7 0xc005082cb8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005082d18 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 21:58:34.375: INFO: Pod "test-rolling-update-deployment-67cf4f6444-5d8kb" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-5d8kb test-rolling-update-deployment-67cf4f6444- deployment-1972 /api/v1/namespaces/deployment-1972/pods/test-rolling-update-deployment-67cf4f6444-5d8kb c6221784-eab4-4fbb-9136-9f1b1beff228 19127528 0 2020-05-25 21:58:30 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 ef029ce3-d8d1-4230-95a7-897fb77c5b23 0xc00236e737 0xc00236e738}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4r5dh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4r5dh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4r5dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:58:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:58:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:58:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 21:58:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.198,StartTime:2020-05-25 21:58:30 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 21:58:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://7f629b9d0d90d3c0ee93879cc753ed6bc27283e6f5ee542991b216144e608d1d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.198,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:58:34.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1972" for this suite. • [SLOW TEST:9.270 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":153,"skipped":2361,"failed":0} SS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:58:34.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 21:58:34.482: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 25 21:58:34.492: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:34.512: INFO: Number of nodes with available pods: 0 May 25 21:58:34.512: INFO: Node jerma-worker is running more than one daemon pod May 25 21:58:35.517: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:35.520: INFO: Number of nodes with available pods: 0 May 25 21:58:35.520: INFO: Node jerma-worker is running more than one daemon pod May 25 21:58:36.627: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:36.636: INFO: Number of nodes with available pods: 0 May 25 21:58:36.636: INFO: Node jerma-worker is running more than one daemon pod May 25 21:58:37.516: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:37.519: INFO: Number of nodes with available pods: 0 May 25 21:58:37.519: INFO: Node jerma-worker is running more than one daemon pod May 25 21:58:38.517: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:38.523: INFO: Number of nodes with available pods: 2 May 25 21:58:38.523: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 25 21:58:38.576: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:38.576: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:38.582: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:39.599: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:39.599: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:39.609: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:40.587: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:40.587: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:40.591: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:41.587: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 25 21:58:41.587: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:41.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:42.586: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:42.586: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:42.586: INFO: Pod daemon-set-dn5p2 is not available May 25 21:58:42.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:43.587: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:43.588: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:43.588: INFO: Pod daemon-set-dn5p2 is not available May 25 21:58:43.592: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:44.587: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:44.587: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:44.587: INFO: Pod daemon-set-dn5p2 is not available May 25 21:58:44.591: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:45.587: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:45.587: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:45.587: INFO: Pod daemon-set-dn5p2 is not available May 25 21:58:45.592: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:46.587: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:46.588: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:46.588: INFO: Pod daemon-set-dn5p2 is not available May 25 21:58:46.592: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:47.586: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 25 21:58:47.586: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:47.586: INFO: Pod daemon-set-dn5p2 is not available May 25 21:58:47.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:48.587: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:48.587: INFO: Wrong image for pod: daemon-set-dn5p2. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:48.587: INFO: Pod daemon-set-dn5p2 is not available May 25 21:58:48.591: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:49.622: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:49.622: INFO: Pod daemon-set-6n4ss is not available May 25 21:58:49.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:50.586: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:50.586: INFO: Pod daemon-set-6n4ss is not available May 25 21:58:50.590: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:51.586: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:51.586: INFO: Pod daemon-set-6n4ss is not available May 25 21:58:51.589: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:52.586: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:52.608: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:53.587: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 25 21:58:53.591: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:54.585: INFO: Wrong image for pod: daemon-set-69g7b. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 25 21:58:54.585: INFO: Pod daemon-set-69g7b is not available May 25 21:58:54.588: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:55.588: INFO: Pod daemon-set-bwsz5 is not available May 25 21:58:55.592: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 25 21:58:55.611: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:55.614: INFO: Number of nodes with available pods: 1 May 25 21:58:55.614: INFO: Node jerma-worker is running more than one daemon pod May 25 21:58:56.618: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:56.621: INFO: Number of nodes with available pods: 1 May 25 21:58:56.621: INFO: Node jerma-worker is running more than one daemon pod May 25 21:58:57.620: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:57.623: INFO: Number of nodes with available pods: 1 May 25 21:58:57.624: INFO: Node jerma-worker is running more than one daemon pod May 25 21:58:58.636: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:58:58.639: INFO: Number of nodes with available pods: 2 May 25 21:58:58.639: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8587, will wait for the garbage collector to delete the pods May 25 21:58:58.712: INFO: Deleting DaemonSet.extensions daemon-set took: 6.085144ms May 25 21:58:58.812: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.239387ms May 25 21:59:09.316: INFO: Number of nodes with available pods: 0 May 25 21:59:09.316: INFO: Number of running nodes: 0, number of available pods: 0 May 25 21:59:09.318: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8587/daemonsets","resourceVersion":"19127751"},"items":null} May 25 21:59:09.321: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8587/pods","resourceVersion":"19127751"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:59:09.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8587" for this suite. 
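The churn above (pods reported with the wrong image, then briefly unavailable, then replaced) is the DaemonSet controller executing a RollingUpdate one node at a time. A minimal sketch of the object being exercised; the label is illustrative, while both images are the ones named in the log:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set        # illustrative; it only has to match the template
  updateStrategy:
    type: RollingUpdate                 # the strategy under test
    rollingUpdate:
      maxUnavailable: 1                 # default: at most one node's pod down at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                       # illustrative container name
        image: docker.io/library/httpd:2.4.38-alpine   # the initial image from the log

Updating spec.template.spec.containers[0].image to gcr.io/kubernetes-e2e-test-images/agnhost:2.8 is what triggers the per-node delete-and-recreate sequence logged above.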
• [SLOW TEST:34.952 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":154,"skipped":2363,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:59:09.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container May 25 21:59:13.556: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-14 PodName:pod-sharedvolume-fe611312-702f-4be4-9fbb-8dfb485913de ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 21:59:13.556: INFO: >>> kubeConfig: /root/.kube/config I0525 21:59:13.586274 6 log.go:172] (0xc00142bd90) (0xc0027bfc20) Create stream I0525 21:59:13.586318 6 log.go:172] (0xc00142bd90) (0xc0027bfc20) Stream added, broadcasting: 1 I0525 21:59:13.588293 6 log.go:172] (0xc00142bd90) Reply frame received for 1 I0525 21:59:13.588334 6 log.go:172] (0xc00142bd90) (0xc0027bfcc0) Create stream I0525 21:59:13.588348 6 log.go:172] (0xc00142bd90) (0xc0027bfcc0) Stream added, broadcasting: 3 I0525 21:59:13.589408 6 log.go:172] (0xc00142bd90) Reply frame received for 3 I0525 21:59:13.589511 6 log.go:172] (0xc00142bd90) (0xc002835360) Create stream I0525 21:59:13.589527 6 log.go:172] (0xc00142bd90) (0xc002835360) Stream added, broadcasting: 5 I0525 21:59:13.590517 6 log.go:172] (0xc00142bd90) Reply frame received for 5 I0525 21:59:13.649020 6 log.go:172] (0xc00142bd90) Data frame received for 3 I0525 21:59:13.649057 6 log.go:172] (0xc0027bfcc0) (3) Data frame handling I0525 21:59:13.649071 6 log.go:172] (0xc0027bfcc0) (3) Data frame sent I0525 21:59:13.649082 6 log.go:172] (0xc00142bd90) Data frame received for 3 I0525 21:59:13.649091 6 log.go:172] (0xc0027bfcc0) (3) Data frame handling I0525 21:59:13.649273 6 log.go:172] (0xc00142bd90) Data frame received for 5 I0525 21:59:13.649295 6 log.go:172] (0xc002835360) (5) Data frame handling I0525 21:59:13.650367 6 log.go:172] (0xc00142bd90) Data frame received for 1 I0525 21:59:13.650388 6 log.go:172] (0xc0027bfc20) (1) Data frame handling I0525 21:59:13.650410 6 log.go:172] (0xc0027bfc20) (1) Data frame sent I0525 21:59:13.650479 6 log.go:172] (0xc00142bd90) (0xc0027bfc20) Stream removed, broadcasting: 1 I0525 21:59:13.650584 6 log.go:172] (0xc00142bd90) (0xc0027bfc20) Stream removed, broadcasting: 1 I0525 21:59:13.650605 
6 log.go:172] (0xc00142bd90) (0xc0027bfcc0) Stream removed, broadcasting: 3 I0525 21:59:13.650619 6 log.go:172] (0xc00142bd90) (0xc002835360) Stream removed, broadcasting: 5 May 25 21:59:13.650: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 I0525 21:59:13.650685 6 log.go:172] (0xc00142bd90) Go away received May 25 21:59:13.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-14" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":155,"skipped":2367,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:59:13.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 25 21:59:13.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-900' May 25 21:59:14.499: INFO: stderr: "" May 25 21:59:14.499: INFO: stdout: "pod/pause created\n" May 25 21:59:14.499: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 25 21:59:14.499: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-900" to be "running and ready" May 25 21:59:14.519: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 19.350697ms May 25 21:59:16.523: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023704003s May 25 21:59:18.527: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.02804691s May 25 21:59:18.528: INFO: Pod "pause" satisfied condition "running and ready" May 25 21:59:18.528: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 25 21:59:18.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-900' May 25 21:59:18.631: INFO: stderr: "" May 25 21:59:18.631: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 25 21:59:18.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-900' May 25 21:59:18.722: INFO: stderr: "" May 25 21:59:18.722: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 25 21:59:18.722: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-900' May 25 21:59:18.857: INFO: stderr: "" May 25 21:59:18.857: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 25 21:59:18.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-900' May 25 21:59:18.951: INFO: stderr: "" May 25 21:59:18.951: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 25 21:59:18.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-900' May 25 21:59:19.067: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 21:59:19.067: INFO: stdout: "pod \"pause\" force deleted\n" May 25 21:59:19.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-900' May 25 21:59:19.165: INFO: stderr: "No resources found in kubectl-900 namespace.\n" May 25 21:59:19.165: INFO: stdout: "" May 25 21:59:19.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-900 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 21:59:19.254: INFO: stderr: "" May 25 21:59:19.254: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:59:19.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-900" for this suite. 
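The syntax being exercised: `kubectl label pods pause testing-label=testing-label-value` adds the label, and the trailing hyphen in `kubectl label pods pause testing-label-` removes it again, which is why the TESTING-LABEL column comes back empty in the second get. The pod itself is trivial; a sketch, assuming a pause image, since the actual manifest is piped in via create -f -:

apiVersion: v1
kind: Pod
metadata:
  name: pause
spec:
  containers:
  - name: pause                         # illustrative container name
    image: k8s.gcr.io/pause:3.1         # assumed image and tag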
• [SLOW TEST:5.598 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":156,"skipped":2382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:59:19.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:59:26.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9788" for this suite. • [SLOW TEST:7.314 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":157,"skipped":2423,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:59:26.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f4999902-7a3a-41f1-a26b-4bd4041677a4 STEP: Creating a pod to test consume configMaps May 25 21:59:26.645: INFO: Waiting up to 5m0s for pod "pod-configmaps-f11a3402-9a62-4f79-b8cf-ee864c1b1d7a" in namespace "configmap-7738" to be "success or failure" May 25 21:59:26.663: INFO: Pod "pod-configmaps-f11a3402-9a62-4f79-b8cf-ee864c1b1d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.702588ms May 25 21:59:28.667: INFO: Pod "pod-configmaps-f11a3402-9a62-4f79-b8cf-ee864c1b1d7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021906803s May 25 21:59:30.671: INFO: Pod "pod-configmaps-f11a3402-9a62-4f79-b8cf-ee864c1b1d7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02640608s STEP: Saw pod success May 25 21:59:30.671: INFO: Pod "pod-configmaps-f11a3402-9a62-4f79-b8cf-ee864c1b1d7a" satisfied condition "success or failure" May 25 21:59:30.674: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f11a3402-9a62-4f79-b8cf-ee864c1b1d7a container configmap-volume-test: STEP: delete the pod May 25 21:59:30.713: INFO: Waiting for pod pod-configmaps-f11a3402-9a62-4f79-b8cf-ee864c1b1d7a to disappear May 25 21:59:30.726: INFO: Pod pod-configmaps-f11a3402-9a62-4f79-b8cf-ee864c1b1d7a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:59:30.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7738" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2434,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:59:30.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 25 21:59:30.886: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:30.912: INFO: Number of nodes with available pods: 0 May 25 21:59:30.912: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:31.917: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:31.928: INFO: Number of nodes with available pods: 0 May 25 21:59:31.928: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:33.056: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:33.060: INFO: Number of nodes with available pods: 0 May 25 21:59:33.060: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:34.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:34.082: INFO: Number of nodes with available pods: 0 May 25 21:59:34.082: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:34.947: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:34.950: INFO: Number of nodes with available pods: 2 May 25 21:59:34.950: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 25 21:59:34.976: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:34.979: INFO: Number of nodes with available pods: 1 May 25 21:59:34.979: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:36.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:36.006: INFO: Number of nodes with available pods: 1 May 25 21:59:36.006: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:36.983: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:36.985: INFO: Number of nodes with available pods: 1 May 25 21:59:36.985: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:37.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:37.987: INFO: Number of nodes with available pods: 1 May 25 21:59:37.987: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:38.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:38.987: INFO: Number of nodes with available pods: 1 May 25 21:59:38.987: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:39.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:39.987: INFO: Number of nodes with available pods: 1 May 25 21:59:39.987: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:40.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:40.986: INFO: Number of nodes with available pods: 1 May 25 21:59:40.986: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:41.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:41.989: INFO: Number of nodes with available pods: 1 May 25 21:59:41.989: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:42.990: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:42.993: INFO: Number of nodes with available pods: 1 May 25 21:59:42.993: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:43.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:43.989: INFO: Number of nodes with available pods: 1 May 25 21:59:43.989: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:44.983: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:44.985: INFO: Number of nodes with available pods: 1 May 25 21:59:44.985: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:45.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:45.987: INFO: Number of nodes with available pods: 1 May 25 21:59:45.987: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:47.014: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:47.018: INFO: Number of nodes with available pods: 1 May 25 21:59:47.018: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:47.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:47.988: INFO: Number of nodes with available pods: 1 May 25 21:59:47.988: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:48.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:48.988: INFO: Number of nodes with available pods: 1 May 25 21:59:48.988: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:49.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:49.987: INFO: Number of nodes with available pods: 1 May 25 21:59:49.987: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:50.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:50.987: INFO: Number of nodes with available pods: 1 May 25 21:59:50.987: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:51.984: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:51.987: INFO: Number of nodes with available pods: 1 May 25 21:59:51.987: INFO: Node jerma-worker is running more than one daemon pod May 25 21:59:52.983: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 21:59:52.987: INFO: Number of nodes with available pods: 2 May 25 21:59:52.987: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3251, will wait for the garbage collector to delete the pods May 25 21:59:53.049: INFO: Deleting DaemonSet.extensions daemon-set took: 6.620545ms May 25 21:59:53.349: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.576951ms May 25 21:59:59.252: INFO: Number of nodes with available pods: 0 May 25 21:59:59.252: INFO: Number of running nodes: 0, number of 
available pods: 0 May 25 21:59:59.288: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3251/daemonsets","resourceVersion":"19128077"},"items":null} May 25 21:59:59.290: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3251/pods","resourceVersion":"19128077"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 21:59:59.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3251" for this suite. • [SLOW TEST:28.571 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":159,"skipped":2437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 21:59:59.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-3ecc3bae-0242-41ff-9ef9-c7362b511b37 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-3ecc3bae-0242-41ff-9ef9-c7362b511b37 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:00:05.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8372" for this suite. 
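For reference, the projected-ConfigMap update flow verified just above (its timing summary follows below) can be reproduced by hand. The demo-cm and cm-watcher names and the busybox image are hypothetical stand-ins for the objects the framework generates; the kubelet refreshes projected volumes on its periodic sync, so the updated value appears after a short delay rather than instantly:

$ kubectl create configmap demo-cm --from-literal=key=value-1
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-watcher
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected/key; echo; sleep 2; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/projected
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF
$ kubectl patch configmap demo-cm -p '{"data":{"key":"value-2"}}'
$ kubectl logs -f cm-watcher    # output flips from value-1 to value-2 once the volume refreshes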
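Likewise, the DaemonSet run-and-stop test earlier in this stretch reduces to creating a DaemonSet, deleting one of its pods, and watching the controller revive it. A minimal hand-run sketch with a hypothetical manifest and image (the framework builds its own spec in daemon_set.go; the control-plane node is skipped because the pod template carries no toleration for the node-role.kubernetes.io/master taint, matching the "can't tolerate" lines above):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # hypothetical stand-in image
EOF
$ kubectl delete pod -l app=daemon-set --field-selector spec.nodeName=jerma-worker
$ kubectl get pods -l app=daemon-set -o wide -w   # a replacement pod appears on jerma-worker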
• [SLOW TEST:6.165 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2463,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:00:05.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6594 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6594 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6594 May 25 22:00:05.539: INFO: Found 0 stateful pods, waiting for 1 May 25 22:00:15.544: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 25 22:00:15.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6594 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 22:00:15.807: INFO: stderr: "I0525 22:00:15.679107 2432 log.go:172] (0xc0005d9130) (0xc0005e5c20) Create stream\nI0525 22:00:15.679176 2432 log.go:172] (0xc0005d9130) (0xc0005e5c20) Stream added, broadcasting: 1\nI0525 22:00:15.682020 2432 log.go:172] (0xc0005d9130) Reply frame received for 1\nI0525 22:00:15.682082 2432 log.go:172] (0xc0005d9130) (0xc00092a000) Create stream\nI0525 22:00:15.682119 2432 log.go:172] (0xc0005d9130) (0xc00092a000) Stream added, broadcasting: 3\nI0525 22:00:15.683399 2432 log.go:172] (0xc0005d9130) Reply frame received for 3\nI0525 22:00:15.683441 2432 log.go:172] (0xc0005d9130) (0xc000564000) Create stream\nI0525 22:00:15.683456 2432 log.go:172] (0xc0005d9130) (0xc000564000) Stream added, broadcasting: 5\nI0525 22:00:15.684395 2432 log.go:172] (0xc0005d9130) Reply frame received for 5\nI0525 22:00:15.754940 2432 log.go:172] (0xc0005d9130) Data frame received for 5\nI0525 22:00:15.754964 2432 log.go:172] (0xc000564000) (5) Data frame handling\nI0525 22:00:15.754977 2432 log.go:172] (0xc000564000) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0525 22:00:15.798700 2432 log.go:172] (0xc0005d9130) Data frame received for 3\nI0525 22:00:15.798747 2432 log.go:172] (0xc00092a000) (3) Data frame handling\nI0525 22:00:15.798785 2432 log.go:172] (0xc00092a000) (3) Data frame sent\nI0525 22:00:15.799001 2432 log.go:172] (0xc0005d9130) Data frame received for 5\nI0525 22:00:15.799033 2432 log.go:172] (0xc0005d9130) Data frame received for 3\nI0525 22:00:15.799058 2432 log.go:172] (0xc00092a000) (3) Data frame handling\nI0525 22:00:15.799081 2432 log.go:172] (0xc000564000) (5) Data frame handling\nI0525 22:00:15.801083 2432 log.go:172] (0xc0005d9130) Data frame received for 1\nI0525 22:00:15.801106 2432 log.go:172] (0xc0005e5c20) (1) Data frame handling\nI0525 22:00:15.801264 2432 log.go:172] (0xc0005e5c20) (1) Data frame sent\nI0525 22:00:15.801282 2432 log.go:172] (0xc0005d9130) (0xc0005e5c20) Stream removed, broadcasting: 1\nI0525 22:00:15.801301 2432 log.go:172] (0xc0005d9130) Go away received\nI0525 22:00:15.801877 2432 log.go:172] (0xc0005d9130) (0xc0005e5c20) Stream removed, broadcasting: 1\nI0525 22:00:15.801922 2432 log.go:172] (0xc0005d9130) (0xc00092a000) Stream removed, broadcasting: 3\nI0525 22:00:15.801942 2432 log.go:172] (0xc0005d9130) (0xc000564000) Stream removed, broadcasting: 5\n" May 25 22:00:15.807: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 22:00:15.807: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 22:00:15.810: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 25 22:00:25.815: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 22:00:25.815: INFO: Waiting for statefulset status.replicas updated to 0 May 25 22:00:25.829: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999535s May 25 22:00:26.834: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99504739s May 25 22:00:27.839: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990026355s May 25 22:00:28.843: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985322051s May 25 22:00:29.848: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981171623s May 25 22:00:30.853: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97610994s May 25 22:00:31.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.971582284s May 25 22:00:32.860: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.967683737s May 25 22:00:33.865: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.964013076s May 25 22:00:34.868: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.774617ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6594 May 25 22:00:35.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6594 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 22:00:36.117: INFO: stderr: "I0525 22:00:36.021451 2455 log.go:172] (0xc0009f66e0) (0xc000942000) Create stream\nI0525 22:00:36.021535 2455 log.go:172] (0xc0009f66e0) (0xc000942000) Stream added, broadcasting: 1\nI0525 22:00:36.024054 2455 log.go:172] (0xc0009f66e0) Reply frame received for 1\nI0525 22:00:36.024111 2455 log.go:172] (0xc0009f66e0) 
(0xc0006d9ae0) Create stream\nI0525 22:00:36.024127 2455 log.go:172] (0xc0009f66e0) (0xc0006d9ae0) Stream added, broadcasting: 3\nI0525 22:00:36.024976 2455 log.go:172] (0xc0009f66e0) Reply frame received for 3\nI0525 22:00:36.025005 2455 log.go:172] (0xc0009f66e0) (0xc0009420a0) Create stream\nI0525 22:00:36.025017 2455 log.go:172] (0xc0009f66e0) (0xc0009420a0) Stream added, broadcasting: 5\nI0525 22:00:36.026213 2455 log.go:172] (0xc0009f66e0) Reply frame received for 5\nI0525 22:00:36.108477 2455 log.go:172] (0xc0009f66e0) Data frame received for 5\nI0525 22:00:36.108551 2455 log.go:172] (0xc0009420a0) (5) Data frame handling\nI0525 22:00:36.108578 2455 log.go:172] (0xc0009420a0) (5) Data frame sent\nI0525 22:00:36.108600 2455 log.go:172] (0xc0009f66e0) Data frame received for 5\nI0525 22:00:36.108617 2455 log.go:172] (0xc0009420a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 22:00:36.108655 2455 log.go:172] (0xc0009f66e0) Data frame received for 3\nI0525 22:00:36.108672 2455 log.go:172] (0xc0006d9ae0) (3) Data frame handling\nI0525 22:00:36.108691 2455 log.go:172] (0xc0006d9ae0) (3) Data frame sent\nI0525 22:00:36.108720 2455 log.go:172] (0xc0009f66e0) Data frame received for 3\nI0525 22:00:36.108740 2455 log.go:172] (0xc0006d9ae0) (3) Data frame handling\nI0525 22:00:36.110562 2455 log.go:172] (0xc0009f66e0) Data frame received for 1\nI0525 22:00:36.110602 2455 log.go:172] (0xc000942000) (1) Data frame handling\nI0525 22:00:36.110629 2455 log.go:172] (0xc000942000) (1) Data frame sent\nI0525 22:00:36.110650 2455 log.go:172] (0xc0009f66e0) (0xc000942000) Stream removed, broadcasting: 1\nI0525 22:00:36.110672 2455 log.go:172] (0xc0009f66e0) Go away received\nI0525 22:00:36.111142 2455 log.go:172] (0xc0009f66e0) (0xc000942000) Stream removed, broadcasting: 1\nI0525 22:00:36.111167 2455 log.go:172] (0xc0009f66e0) (0xc0006d9ae0) Stream removed, broadcasting: 3\nI0525 22:00:36.111179 2455 log.go:172] (0xc0009f66e0) (0xc0009420a0) Stream removed, broadcasting: 5\n" May 25 22:00:36.118: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 22:00:36.118: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 22:00:36.121: INFO: Found 1 stateful pods, waiting for 3 May 25 22:00:46.134: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 25 22:00:46.134: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 25 22:00:46.134: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 25 22:00:46.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6594 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 22:00:46.376: INFO: stderr: "I0525 22:00:46.271599 2477 log.go:172] (0xc000986000) (0xc0005d25a0) Create stream\nI0525 22:00:46.271652 2477 log.go:172] (0xc000986000) (0xc0005d25a0) Stream added, broadcasting: 1\nI0525 22:00:46.274401 2477 log.go:172] (0xc000986000) Reply frame received for 1\nI0525 22:00:46.274441 2477 log.go:172] (0xc000986000) (0xc000a20000) Create stream\nI0525 22:00:46.274452 2477 log.go:172] (0xc000986000) (0xc000a20000) Stream added, broadcasting: 3\nI0525 22:00:46.275351 2477 log.go:172] (0xc000986000) 
Reply frame received for 3\nI0525 22:00:46.275404 2477 log.go:172] (0xc000986000) (0xc00021d360) Create stream\nI0525 22:00:46.275420 2477 log.go:172] (0xc000986000) (0xc00021d360) Stream added, broadcasting: 5\nI0525 22:00:46.276147 2477 log.go:172] (0xc000986000) Reply frame received for 5\nI0525 22:00:46.368847 2477 log.go:172] (0xc000986000) Data frame received for 3\nI0525 22:00:46.368879 2477 log.go:172] (0xc000a20000) (3) Data frame handling\nI0525 22:00:46.368887 2477 log.go:172] (0xc000a20000) (3) Data frame sent\nI0525 22:00:46.368892 2477 log.go:172] (0xc000986000) Data frame received for 3\nI0525 22:00:46.368896 2477 log.go:172] (0xc000a20000) (3) Data frame handling\nI0525 22:00:46.368917 2477 log.go:172] (0xc000986000) Data frame received for 5\nI0525 22:00:46.368924 2477 log.go:172] (0xc00021d360) (5) Data frame handling\nI0525 22:00:46.368932 2477 log.go:172] (0xc00021d360) (5) Data frame sent\nI0525 22:00:46.368938 2477 log.go:172] (0xc000986000) Data frame received for 5\nI0525 22:00:46.368943 2477 log.go:172] (0xc00021d360) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 22:00:46.370612 2477 log.go:172] (0xc000986000) Data frame received for 1\nI0525 22:00:46.370706 2477 log.go:172] (0xc0005d25a0) (1) Data frame handling\nI0525 22:00:46.370740 2477 log.go:172] (0xc0005d25a0) (1) Data frame sent\nI0525 22:00:46.370838 2477 log.go:172] (0xc000986000) (0xc0005d25a0) Stream removed, broadcasting: 1\nI0525 22:00:46.370909 2477 log.go:172] (0xc000986000) Go away received\nI0525 22:00:46.371152 2477 log.go:172] (0xc000986000) (0xc0005d25a0) Stream removed, broadcasting: 1\nI0525 22:00:46.371167 2477 log.go:172] (0xc000986000) (0xc000a20000) Stream removed, broadcasting: 3\nI0525 22:00:46.371175 2477 log.go:172] (0xc000986000) (0xc00021d360) Stream removed, broadcasting: 5\n" May 25 22:00:46.376: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 22:00:46.376: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 22:00:46.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6594 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 22:00:46.618: INFO: stderr: "I0525 22:00:46.515234 2497 log.go:172] (0xc000446dc0) (0xc0006b7a40) Create stream\nI0525 22:00:46.515291 2497 log.go:172] (0xc000446dc0) (0xc0006b7a40) Stream added, broadcasting: 1\nI0525 22:00:46.517524 2497 log.go:172] (0xc000446dc0) Reply frame received for 1\nI0525 22:00:46.517551 2497 log.go:172] (0xc000446dc0) (0xc000966000) Create stream\nI0525 22:00:46.517558 2497 log.go:172] (0xc000446dc0) (0xc000966000) Stream added, broadcasting: 3\nI0525 22:00:46.518695 2497 log.go:172] (0xc000446dc0) Reply frame received for 3\nI0525 22:00:46.518759 2497 log.go:172] (0xc000446dc0) (0xc000290000) Create stream\nI0525 22:00:46.518777 2497 log.go:172] (0xc000446dc0) (0xc000290000) Stream added, broadcasting: 5\nI0525 22:00:46.519868 2497 log.go:172] (0xc000446dc0) Reply frame received for 5\nI0525 22:00:46.587308 2497 log.go:172] (0xc000446dc0) Data frame received for 5\nI0525 22:00:46.587339 2497 log.go:172] (0xc000290000) (5) Data frame handling\nI0525 22:00:46.587360 2497 log.go:172] (0xc000290000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 22:00:46.609614 2497 log.go:172] (0xc000446dc0) Data frame received for 3\nI0525 22:00:46.609640 2497 
log.go:172] (0xc000966000) (3) Data frame handling\nI0525 22:00:46.609657 2497 log.go:172] (0xc000966000) (3) Data frame sent\nI0525 22:00:46.609908 2497 log.go:172] (0xc000446dc0) Data frame received for 5\nI0525 22:00:46.609964 2497 log.go:172] (0xc000290000) (5) Data frame handling\nI0525 22:00:46.610064 2497 log.go:172] (0xc000446dc0) Data frame received for 3\nI0525 22:00:46.610096 2497 log.go:172] (0xc000966000) (3) Data frame handling\nI0525 22:00:46.612132 2497 log.go:172] (0xc000446dc0) Data frame received for 1\nI0525 22:00:46.612148 2497 log.go:172] (0xc0006b7a40) (1) Data frame handling\nI0525 22:00:46.612170 2497 log.go:172] (0xc0006b7a40) (1) Data frame sent\nI0525 22:00:46.612183 2497 log.go:172] (0xc000446dc0) (0xc0006b7a40) Stream removed, broadcasting: 1\nI0525 22:00:46.612344 2497 log.go:172] (0xc000446dc0) Go away received\nI0525 22:00:46.612437 2497 log.go:172] (0xc000446dc0) (0xc0006b7a40) Stream removed, broadcasting: 1\nI0525 22:00:46.612451 2497 log.go:172] (0xc000446dc0) (0xc000966000) Stream removed, broadcasting: 3\nI0525 22:00:46.612460 2497 log.go:172] (0xc000446dc0) (0xc000290000) Stream removed, broadcasting: 5\n" May 25 22:00:46.619: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 22:00:46.619: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 22:00:46.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6594 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 25 22:00:46.895: INFO: stderr: "I0525 22:00:46.757980 2518 log.go:172] (0xc000a48790) (0xc0006dbe00) Create stream\nI0525 22:00:46.758054 2518 log.go:172] (0xc000a48790) (0xc0006dbe00) Stream added, broadcasting: 1\nI0525 22:00:46.760700 2518 log.go:172] (0xc000a48790) Reply frame received for 1\nI0525 22:00:46.760737 2518 log.go:172] (0xc000a48790) (0xc0006dbea0) Create stream\nI0525 22:00:46.760753 2518 log.go:172] (0xc000a48790) (0xc0006dbea0) Stream added, broadcasting: 3\nI0525 22:00:46.762210 2518 log.go:172] (0xc000a48790) Reply frame received for 3\nI0525 22:00:46.762262 2518 log.go:172] (0xc000a48790) (0xc0006826e0) Create stream\nI0525 22:00:46.762280 2518 log.go:172] (0xc000a48790) (0xc0006826e0) Stream added, broadcasting: 5\nI0525 22:00:46.763208 2518 log.go:172] (0xc000a48790) Reply frame received for 5\nI0525 22:00:46.829829 2518 log.go:172] (0xc000a48790) Data frame received for 5\nI0525 22:00:46.829853 2518 log.go:172] (0xc0006826e0) (5) Data frame handling\nI0525 22:00:46.829862 2518 log.go:172] (0xc0006826e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0525 22:00:46.885533 2518 log.go:172] (0xc000a48790) Data frame received for 3\nI0525 22:00:46.885665 2518 log.go:172] (0xc0006dbea0) (3) Data frame handling\nI0525 22:00:46.885679 2518 log.go:172] (0xc0006dbea0) (3) Data frame sent\nI0525 22:00:46.885955 2518 log.go:172] (0xc000a48790) Data frame received for 3\nI0525 22:00:46.885993 2518 log.go:172] (0xc0006dbea0) (3) Data frame handling\nI0525 22:00:46.886672 2518 log.go:172] (0xc000a48790) Data frame received for 5\nI0525 22:00:46.886717 2518 log.go:172] (0xc0006826e0) (5) Data frame handling\nI0525 22:00:46.888498 2518 log.go:172] (0xc000a48790) Data frame received for 1\nI0525 22:00:46.888517 2518 log.go:172] (0xc0006dbe00) (1) Data frame handling\nI0525 22:00:46.888527 2518 log.go:172] (0xc0006dbe00) (1) Data frame sent\nI0525 
22:00:46.888549 2518 log.go:172] (0xc000a48790) (0xc0006dbe00) Stream removed, broadcasting: 1\nI0525 22:00:46.888563 2518 log.go:172] (0xc000a48790) Go away received\nI0525 22:00:46.889347 2518 log.go:172] (0xc000a48790) (0xc0006dbe00) Stream removed, broadcasting: 1\nI0525 22:00:46.889399 2518 log.go:172] (0xc000a48790) (0xc0006dbea0) Stream removed, broadcasting: 3\nI0525 22:00:46.889419 2518 log.go:172] (0xc000a48790) (0xc0006826e0) Stream removed, broadcasting: 5\n" May 25 22:00:46.895: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 25 22:00:46.895: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 25 22:00:46.895: INFO: Waiting for statefulset status.replicas updated to 0 May 25 22:00:46.900: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 25 22:00:56.907: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 25 22:00:56.907: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 25 22:00:56.907: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 25 22:00:56.920: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999543s May 25 22:00:57.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993051216s May 25 22:00:58.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988171161s May 25 22:00:59.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983364987s May 25 22:01:00.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97763258s May 25 22:01:01.955: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.972275544s May 25 22:01:02.959: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.958404436s May 25 22:01:03.979: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.953970378s May 25 22:01:04.984: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.934518184s May 25 22:01:05.989: INFO: Verifying statefulset ss doesn't scale past 3 for another 929.764872ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-6594 May 25 22:01:06.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6594 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 22:01:07.234: INFO: stderr: "I0525 22:01:07.128731 2539 log.go:172] (0xc0003c0d10) (0xc00051dc20) Create stream\nI0525 22:01:07.128801 2539 log.go:172] (0xc0003c0d10) (0xc00051dc20) Stream added, broadcasting: 1\nI0525 22:01:07.132026 2539 log.go:172] (0xc0003c0d10) Reply frame received for 1\nI0525 22:01:07.132071 2539 log.go:172] (0xc0003c0d10) (0xc0007de000) Create stream\nI0525 22:01:07.132092 2539 log.go:172] (0xc0003c0d10) (0xc0007de000) Stream added, broadcasting: 3\nI0525 22:01:07.133377 2539 log.go:172] (0xc0003c0d10) Reply frame received for 3\nI0525 22:01:07.133417 2539 log.go:172] (0xc0003c0d10) (0xc00051dcc0) Create stream\nI0525 22:01:07.133429 2539 log.go:172] (0xc0003c0d10) (0xc00051dcc0) Stream added, broadcasting: 5\nI0525 22:01:07.134409 2539 log.go:172] (0xc0003c0d10) Reply frame received for 5\nI0525 22:01:07.227416 2539 log.go:172] (0xc0003c0d10) Data frame received for 5\nI0525 22:01:07.227450 2539 log.go:172] (0xc0003c0d10) Data frame received for 
3\nI0525 22:01:07.227468 2539 log.go:172] (0xc0007de000) (3) Data frame handling\nI0525 22:01:07.227479 2539 log.go:172] (0xc0007de000) (3) Data frame sent\nI0525 22:01:07.227483 2539 log.go:172] (0xc0003c0d10) Data frame received for 3\nI0525 22:01:07.227488 2539 log.go:172] (0xc0007de000) (3) Data frame handling\nI0525 22:01:07.227510 2539 log.go:172] (0xc00051dcc0) (5) Data frame handling\nI0525 22:01:07.227516 2539 log.go:172] (0xc00051dcc0) (5) Data frame sent\nI0525 22:01:07.227525 2539 log.go:172] (0xc0003c0d10) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 22:01:07.227539 2539 log.go:172] (0xc00051dcc0) (5) Data frame handling\nI0525 22:01:07.228561 2539 log.go:172] (0xc0003c0d10) Data frame received for 1\nI0525 22:01:07.228577 2539 log.go:172] (0xc00051dc20) (1) Data frame handling\nI0525 22:01:07.228583 2539 log.go:172] (0xc00051dc20) (1) Data frame sent\nI0525 22:01:07.228594 2539 log.go:172] (0xc0003c0d10) (0xc00051dc20) Stream removed, broadcasting: 1\nI0525 22:01:07.228675 2539 log.go:172] (0xc0003c0d10) Go away received\nI0525 22:01:07.228926 2539 log.go:172] (0xc0003c0d10) (0xc00051dc20) Stream removed, broadcasting: 1\nI0525 22:01:07.228941 2539 log.go:172] (0xc0003c0d10) (0xc0007de000) Stream removed, broadcasting: 3\nI0525 22:01:07.228949 2539 log.go:172] (0xc0003c0d10) (0xc00051dcc0) Stream removed, broadcasting: 5\n" May 25 22:01:07.234: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 22:01:07.234: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 22:01:07.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6594 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 22:01:07.541: INFO: stderr: "I0525 22:01:07.358047 2561 log.go:172] (0xc000b38b00) (0xc0000901e0) Create stream\nI0525 22:01:07.358122 2561 log.go:172] (0xc000b38b00) (0xc0000901e0) Stream added, broadcasting: 1\nI0525 22:01:07.360553 2561 log.go:172] (0xc000b38b00) Reply frame received for 1\nI0525 22:01:07.360597 2561 log.go:172] (0xc000b38b00) (0xc000090320) Create stream\nI0525 22:01:07.360625 2561 log.go:172] (0xc000b38b00) (0xc000090320) Stream added, broadcasting: 3\nI0525 22:01:07.361914 2561 log.go:172] (0xc000b38b00) Reply frame received for 3\nI0525 22:01:07.361964 2561 log.go:172] (0xc000b38b00) (0xc000693c20) Create stream\nI0525 22:01:07.361995 2561 log.go:172] (0xc000b38b00) (0xc000693c20) Stream added, broadcasting: 5\nI0525 22:01:07.362903 2561 log.go:172] (0xc000b38b00) Reply frame received for 5\nI0525 22:01:07.534521 2561 log.go:172] (0xc000b38b00) Data frame received for 3\nI0525 22:01:07.534564 2561 log.go:172] (0xc000090320) (3) Data frame handling\nI0525 22:01:07.534583 2561 log.go:172] (0xc000090320) (3) Data frame sent\nI0525 22:01:07.534595 2561 log.go:172] (0xc000b38b00) Data frame received for 3\nI0525 22:01:07.534605 2561 log.go:172] (0xc000090320) (3) Data frame handling\nI0525 22:01:07.534662 2561 log.go:172] (0xc000b38b00) Data frame received for 5\nI0525 22:01:07.534706 2561 log.go:172] (0xc000693c20) (5) Data frame handling\nI0525 22:01:07.534728 2561 log.go:172] (0xc000693c20) (5) Data frame sent\nI0525 22:01:07.534754 2561 log.go:172] (0xc000b38b00) Data frame received for 5\nI0525 22:01:07.534768 2561 log.go:172] (0xc000693c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 
22:01:07.536038 2561 log.go:172] (0xc000b38b00) Data frame received for 1\nI0525 22:01:07.536063 2561 log.go:172] (0xc0000901e0) (1) Data frame handling\nI0525 22:01:07.536084 2561 log.go:172] (0xc0000901e0) (1) Data frame sent\nI0525 22:01:07.536110 2561 log.go:172] (0xc000b38b00) (0xc0000901e0) Stream removed, broadcasting: 1\nI0525 22:01:07.536151 2561 log.go:172] (0xc000b38b00) Go away received\nI0525 22:01:07.536432 2561 log.go:172] (0xc000b38b00) (0xc0000901e0) Stream removed, broadcasting: 1\nI0525 22:01:07.536446 2561 log.go:172] (0xc000b38b00) (0xc000090320) Stream removed, broadcasting: 3\nI0525 22:01:07.536456 2561 log.go:172] (0xc000b38b00) (0xc000693c20) Stream removed, broadcasting: 5\n" May 25 22:01:07.542: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 22:01:07.542: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 22:01:07.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6594 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 25 22:01:07.741: INFO: stderr: "I0525 22:01:07.675876 2584 log.go:172] (0xc0009e0790) (0xc0009bc000) Create stream\nI0525 22:01:07.675936 2584 log.go:172] (0xc0009e0790) (0xc0009bc000) Stream added, broadcasting: 1\nI0525 22:01:07.678252 2584 log.go:172] (0xc0009e0790) Reply frame received for 1\nI0525 22:01:07.678305 2584 log.go:172] (0xc0009e0790) (0xc0009bc0a0) Create stream\nI0525 22:01:07.678325 2584 log.go:172] (0xc0009e0790) (0xc0009bc0a0) Stream added, broadcasting: 3\nI0525 22:01:07.679706 2584 log.go:172] (0xc0009e0790) Reply frame received for 3\nI0525 22:01:07.679751 2584 log.go:172] (0xc0009e0790) (0xc000681cc0) Create stream\nI0525 22:01:07.679775 2584 log.go:172] (0xc0009e0790) (0xc000681cc0) Stream added, broadcasting: 5\nI0525 22:01:07.680856 2584 log.go:172] (0xc0009e0790) Reply frame received for 5\nI0525 22:01:07.734245 2584 log.go:172] (0xc0009e0790) Data frame received for 3\nI0525 22:01:07.734302 2584 log.go:172] (0xc0009bc0a0) (3) Data frame handling\nI0525 22:01:07.734325 2584 log.go:172] (0xc0009bc0a0) (3) Data frame sent\nI0525 22:01:07.734357 2584 log.go:172] (0xc0009e0790) Data frame received for 5\nI0525 22:01:07.734374 2584 log.go:172] (0xc000681cc0) (5) Data frame handling\nI0525 22:01:07.734392 2584 log.go:172] (0xc000681cc0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0525 22:01:07.734547 2584 log.go:172] (0xc0009e0790) Data frame received for 5\nI0525 22:01:07.734580 2584 log.go:172] (0xc000681cc0) (5) Data frame handling\nI0525 22:01:07.734709 2584 log.go:172] (0xc0009e0790) Data frame received for 3\nI0525 22:01:07.734725 2584 log.go:172] (0xc0009bc0a0) (3) Data frame handling\nI0525 22:01:07.736126 2584 log.go:172] (0xc0009e0790) Data frame received for 1\nI0525 22:01:07.736155 2584 log.go:172] (0xc0009bc000) (1) Data frame handling\nI0525 22:01:07.736170 2584 log.go:172] (0xc0009bc000) (1) Data frame sent\nI0525 22:01:07.736188 2584 log.go:172] (0xc0009e0790) (0xc0009bc000) Stream removed, broadcasting: 1\nI0525 22:01:07.736218 2584 log.go:172] (0xc0009e0790) Go away received\nI0525 22:01:07.736481 2584 log.go:172] (0xc0009e0790) (0xc0009bc000) Stream removed, broadcasting: 1\nI0525 22:01:07.736500 2584 log.go:172] (0xc0009e0790) (0xc0009bc0a0) Stream removed, broadcasting: 3\nI0525 22:01:07.736513 2584 log.go:172] (0xc0009e0790) (0xc000681cc0) Stream removed, 
broadcasting: 5\n" May 25 22:01:07.741: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 25 22:01:07.741: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 25 22:01:07.741: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 25 22:01:37.760: INFO: Deleting all statefulset in ns statefulset-6594 May 25 22:01:37.762: INFO: Scaling statefulset ss to 0 May 25 22:01:37.770: INFO: Waiting for statefulset status.replicas updated to 0 May 25 22:01:37.773: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:01:37.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6594" for this suite. • [SLOW TEST:92.344 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":161,"skipped":2465,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:01:37.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:01:37.867: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:01:41.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4588" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2475,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:01:41.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:01:55.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4026" for this suite. • [SLOW TEST:13.255 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":163,"skipped":2508,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:01:55.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod May 25 22:01:55.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-3629 -- logs-generator --log-lines-total 100 --run-duration 20s' May 25 22:01:55.360: INFO: stderr: "" May 25 22:01:55.360: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 25 22:01:55.360: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 25 22:01:55.360: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3629" to be "running and ready, or succeeded" May 25 22:01:55.374: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 14.376836ms May 25 22:01:57.379: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018584964s May 25 22:01:59.383: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.023116022s May 25 22:01:59.383: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 25 22:01:59.383: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 25 22:01:59.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3629' May 25 22:01:59.502: INFO: stderr: "" May 25 22:01:59.502: INFO: stdout: "I0525 22:01:58.147854 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/npmj 229\nI0525 22:01:58.348014 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/rs9q 457\nI0525 22:01:58.548144 1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/6hcc 500\nI0525 22:01:58.748045 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/j6js 507\nI0525 22:01:58.948086 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/mhgg 321\nI0525 22:01:59.148086 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/zwh 307\nI0525 22:01:59.348056 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/wld 392\n" STEP: limiting log lines May 25 22:01:59.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3629 --tail=1' May 25 22:01:59.617: INFO: stderr: "" May 25 22:01:59.617: INFO: stdout: "I0525 22:01:59.548055 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/7wq7 376\n" May 25 22:01:59.617: INFO: got output "I0525 22:01:59.548055 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/7wq7 376\n" STEP: limiting log bytes May 25 22:01:59.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3629 --limit-bytes=1' May 25 22:01:59.731: INFO: stderr: "" May 25 22:01:59.731: INFO: stdout: "I" May 25 22:01:59.731: INFO: got output "I" STEP: exposing timestamps May 25 22:01:59.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3629 --tail=1 --timestamps' May 25 22:01:59.861: INFO: stderr: "" May 25 22:01:59.861: INFO: stdout: "2020-05-25T22:01:59.748193585Z I0525 22:01:59.748039 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/mjbm 223\n" May 25 22:01:59.861: INFO: got output "2020-05-25T22:01:59.748193585Z I0525 22:01:59.748039 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/mjbm 223\n" STEP: restricting to a time range May 25 22:02:02.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3629 --since=1s' May 25 22:02:02.472: INFO: stderr: "" May 25 22:02:02.472: INFO: stdout: "I0525 22:02:01.548102 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/rsl 279\nI0525 22:02:01.748034 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/2pjm 284\nI0525 22:02:01.948074 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/gx6 492\nI0525 22:02:02.148114 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/tvfb 467\nI0525 22:02:02.347998 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/jkts 425\n" May 25 22:02:02.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3629 --since=24h' May 25 22:02:02.580: INFO: stderr: "" May 25 22:02:02.580: INFO: stdout: "I0525 22:01:58.147854 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/npmj 229\nI0525 22:01:58.348014 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/rs9q 457\nI0525 22:01:58.548144 1 logs_generator.go:76] 2 POST 
/api/v1/namespaces/default/pods/6hcc 500\nI0525 22:01:58.748045 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/j6js 507\nI0525 22:01:58.948086 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/mhgg 321\nI0525 22:01:59.148086 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/zwh 307\nI0525 22:01:59.348056 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/wld 392\nI0525 22:01:59.548055 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/7wq7 376\nI0525 22:01:59.748039 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/mjbm 223\nI0525 22:01:59.948087 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/c7bq 399\nI0525 22:02:00.147999 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/g7wc 314\nI0525 22:02:00.348010 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/kube-system/pods/9lp 465\nI0525 22:02:00.548020 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/lvpv 292\nI0525 22:02:00.748060 1 logs_generator.go:76] 13 POST /api/v1/namespaces/default/pods/d9p 530\nI0525 22:02:00.948059 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/zs4 598\nI0525 22:02:01.148063 1 logs_generator.go:76] 15 POST /api/v1/namespaces/default/pods/8cw 214\nI0525 22:02:01.348113 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/ns/pods/2fv 447\nI0525 22:02:01.548102 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/rsl 279\nI0525 22:02:01.748034 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/2pjm 284\nI0525 22:02:01.948074 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/gx6 492\nI0525 22:02:02.148114 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/tvfb 467\nI0525 22:02:02.347998 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/jkts 425\nI0525 22:02:02.547997 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/42mt 493\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 25 22:02:02.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3629' May 25 22:02:09.502: INFO: stderr: "" May 25 22:02:09.502: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:02:09.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3629" for this suite. 
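The filtering flags exercised above compose freely against any pod; a condensed recap using the same logs-generator pod (run in its namespace, or add -n kubectl-3629):

$ kubectl logs logs-generator                        # everything emitted so far
$ kubectl logs logs-generator --tail=1               # only the last line
$ kubectl logs logs-generator --limit-bytes=1        # only the first byte
$ kubectl logs logs-generator --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
$ kubectl logs logs-generator --since=1s             # only entries from the last second
$ kubectl logs logs-generator --since=24h            # effectively the full log again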
• [SLOW TEST:14.324 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":164,"skipped":2517,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:02:09.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 25 22:02:09.629: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:02:09.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8086" for this suite. 
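As the proxy test above shows, -p 0 (--port=0) tells kubectl proxy to bind a random free port, which it reports on startup; the port number below is a made-up example, and --disable-filter removes the request-filtering protection, which is only sensible in test setups:

$ kubectl proxy -p 0 --disable-filter=true &
Starting to serve on 127.0.0.1:37841   # example output; the port is OS-assigned
$ curl http://127.0.0.1:37841/api/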
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":165,"skipped":2523,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:02:09.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3435 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3435 STEP: creating replication controller externalsvc in namespace services-3435 I0525 22:02:10.018151 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3435, replica count: 2 I0525 22:02:13.068632 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 22:02:16.068819 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 25 22:02:16.138: INFO: Creating new exec pod May 25 22:02:20.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3435 execpodjr7qr -- /bin/sh -x -c nslookup nodeport-service' May 25 22:02:20.509: INFO: stderr: "I0525 22:02:20.287267 2791 log.go:172] (0xc00021af20) (0xc0006a3a40) Create stream\nI0525 22:02:20.287334 2791 log.go:172] (0xc00021af20) (0xc0006a3a40) Stream added, broadcasting: 1\nI0525 22:02:20.289866 2791 log.go:172] (0xc00021af20) Reply frame received for 1\nI0525 22:02:20.289906 2791 log.go:172] (0xc00021af20) (0xc000146000) Create stream\nI0525 22:02:20.289919 2791 log.go:172] (0xc00021af20) (0xc000146000) Stream added, broadcasting: 3\nI0525 22:02:20.290856 2791 log.go:172] (0xc00021af20) Reply frame received for 3\nI0525 22:02:20.290878 2791 log.go:172] (0xc00021af20) (0xc0001460a0) Create stream\nI0525 22:02:20.290884 2791 log.go:172] (0xc00021af20) (0xc0001460a0) Stream added, broadcasting: 5\nI0525 22:02:20.291724 2791 log.go:172] (0xc00021af20) Reply frame received for 5\nI0525 22:02:20.366423 2791 log.go:172] (0xc00021af20) Data frame received for 5\nI0525 22:02:20.366472 2791 log.go:172] (0xc0001460a0) (5) Data frame handling\nI0525 22:02:20.366505 2791 log.go:172] (0xc0001460a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0525 22:02:20.499526 2791 log.go:172] (0xc00021af20) Data frame received for 3\nI0525 22:02:20.499552 2791 log.go:172] (0xc000146000) (3) Data frame handling\nI0525 22:02:20.499561 2791 log.go:172] (0xc000146000) (3) Data frame 
sent\nI0525 22:02:20.500246 2791 log.go:172] (0xc00021af20) Data frame received for 3\nI0525 22:02:20.500264 2791 log.go:172] (0xc000146000) (3) Data frame handling\nI0525 22:02:20.500277 2791 log.go:172] (0xc000146000) (3) Data frame sent\nI0525 22:02:20.500844 2791 log.go:172] (0xc00021af20) Data frame received for 3\nI0525 22:02:20.500864 2791 log.go:172] (0xc000146000) (3) Data frame handling\nI0525 22:02:20.501108 2791 log.go:172] (0xc00021af20) Data frame received for 5\nI0525 22:02:20.501388 2791 log.go:172] (0xc0001460a0) (5) Data frame handling\nI0525 22:02:20.502630 2791 log.go:172] (0xc00021af20) Data frame received for 1\nI0525 22:02:20.502653 2791 log.go:172] (0xc0006a3a40) (1) Data frame handling\nI0525 22:02:20.502664 2791 log.go:172] (0xc0006a3a40) (1) Data frame sent\nI0525 22:02:20.502676 2791 log.go:172] (0xc00021af20) (0xc0006a3a40) Stream removed, broadcasting: 1\nI0525 22:02:20.502696 2791 log.go:172] (0xc00021af20) Go away received\nI0525 22:02:20.502975 2791 log.go:172] (0xc00021af20) (0xc0006a3a40) Stream removed, broadcasting: 1\nI0525 22:02:20.502990 2791 log.go:172] (0xc00021af20) (0xc000146000) Stream removed, broadcasting: 3\nI0525 22:02:20.502997 2791 log.go:172] (0xc00021af20) (0xc0001460a0) Stream removed, broadcasting: 5\n" May 25 22:02:20.509: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3435.svc.cluster.local\tcanonical name = externalsvc.services-3435.svc.cluster.local.\nName:\texternalsvc.services-3435.svc.cluster.local\nAddress: 10.111.83.149\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3435, will wait for the garbage collector to delete the pods May 25 22:02:20.582: INFO: Deleting ReplicationController externalsvc took: 18.317259ms May 25 22:02:20.882: INFO: Terminating ReplicationController externalsvc pods took: 300.27259ms May 25 22:02:26.121: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:02:26.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3435" for this suite. 
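In kubectl terms, the test above mutates an existing NodePort service into type=ExternalName pointing at a second, live service, then resolves the name from an exec pod. A hand-run sketch of the resulting object and the verification step (the in-place type change itself is done through the framework's service-update helper; the busybox:1.28 image is a hypothetical choice whose nslookup behaves well):

$ kubectl create service externalname nodeport-service \
    --external-name externalsvc.services-3435.svc.cluster.local
$ kubectl run -it --rm dnsutils --image=busybox:1.28 --restart=Never \
    -- nslookup nodeport-service
# Expect a canonical-name answer pointing at externalsvc, matching the nslookup output above.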
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:16.434 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":166,"skipped":2571,"failed":0} [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:02:26.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-4e56d9fe-333f-4da7-9eca-06ffcffa196a STEP: Creating a pod to test consume secrets May 25 22:02:26.216: INFO: Waiting up to 5m0s for pod "pod-secrets-1a7d72ab-f8e4-401a-9f53-3d81ebbf4727" in namespace "secrets-9422" to be "success or failure" May 25 22:02:26.248: INFO: Pod "pod-secrets-1a7d72ab-f8e4-401a-9f53-3d81ebbf4727": Phase="Pending", Reason="", readiness=false. Elapsed: 32.435303ms May 25 22:02:28.405: INFO: Pod "pod-secrets-1a7d72ab-f8e4-401a-9f53-3d81ebbf4727": Phase="Pending", Reason="", readiness=false. Elapsed: 2.189525046s May 25 22:02:30.410: INFO: Pod "pod-secrets-1a7d72ab-f8e4-401a-9f53-3d81ebbf4727": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.194329472s STEP: Saw pod success May 25 22:02:30.410: INFO: Pod "pod-secrets-1a7d72ab-f8e4-401a-9f53-3d81ebbf4727" satisfied condition "success or failure" May 25 22:02:30.414: INFO: Trying to get logs from node jerma-worker pod pod-secrets-1a7d72ab-f8e4-401a-9f53-3d81ebbf4727 container secret-volume-test: STEP: delete the pod May 25 22:02:30.433: INFO: Waiting for pod pod-secrets-1a7d72ab-f8e4-401a-9f53-3d81ebbf4727 to disappear May 25 22:02:30.443: INFO: Pod pod-secrets-1a7d72ab-f8e4-401a-9f53-3d81ebbf4727 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:02:30.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9422" for this suite. 
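A minimal pod spec exercising what the secret test above checks: defaultMode fixes the permission bits on the projected files, fsGroup sets their group ownership, and the container runs as a non-root UID that can still read them through the group bit. Names and the UID/GID values are hypothetical stand-ins:

apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  securityContext:
    runAsUser: 1000     # non-root, per the [LinuxOnly] variant
    fsGroup: 1001       # mounted files become group 1001
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: my-secret
      defaultMode: 0440   # r--r-----, readable via the fsGroup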
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2571,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:02:30.450: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 25 22:02:30.545: INFO: Waiting up to 5m0s for pod "client-containers-18530849-2ab5-4a64-b247-a8d1c258c5da" in namespace "containers-1169" to be "success or failure" May 25 22:02:30.563: INFO: Pod "client-containers-18530849-2ab5-4a64-b247-a8d1c258c5da": Phase="Pending", Reason="", readiness=false. Elapsed: 18.104394ms May 25 22:02:32.572: INFO: Pod "client-containers-18530849-2ab5-4a64-b247-a8d1c258c5da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027477447s May 25 22:02:34.576: INFO: Pod "client-containers-18530849-2ab5-4a64-b247-a8d1c258c5da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030991257s STEP: Saw pod success May 25 22:02:34.576: INFO: Pod "client-containers-18530849-2ab5-4a64-b247-a8d1c258c5da" satisfied condition "success or failure" May 25 22:02:34.578: INFO: Trying to get logs from node jerma-worker2 pod client-containers-18530849-2ab5-4a64-b247-a8d1c258c5da container test-container: STEP: delete the pod May 25 22:02:34.594: INFO: Waiting for pod client-containers-18530849-2ab5-4a64-b247-a8d1c258c5da to disappear May 25 22:02:34.611: INFO: Pod client-containers-18530849-2ab5-4a64-b247-a8d1c258c5da no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:02:34.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1169" for this suite. 
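
"Overriding the image's default command (docker entrypoint)" maps to the container's command field; args would override the image CMD instead. A minimal sketch with an assumed payload:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: client-containers-example
      namespace: containers-1169
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/echo"]              # replaces the image ENTRYPOINT
        args: ["entrypoint", "overridden"]  # replaces the image CMD
    EOF
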
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2615,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:02:34.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:02:34.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7144" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":169,"skipped":2662,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:02:34.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 25 22:02:34.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3811' May 25 22:02:35.124: INFO: stderr: "" May 25 22:02:35.124: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 25 22:02:36.128: INFO: Selector matched 1 pods for map[app:agnhost] May 25 22:02:36.128: INFO: Found 0 / 1 May 25 22:02:37.416: INFO: Selector matched 1 pods for map[app:agnhost] May 25 22:02:37.416: INFO: Found 0 / 1 May 25 22:02:38.196: INFO: Selector matched 1 pods for map[app:agnhost] May 25 22:02:38.196: INFO: Found 0 / 1 May 25 22:02:39.129: INFO: Selector matched 1 pods for map[app:agnhost] May 25 22:02:39.129: INFO: Found 1 / 1 May 25 22:02:39.129: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 25 22:02:39.132: INFO: Selector matched 1 pods for map[app:agnhost] May 25 22:02:39.132: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 25 22:02:39.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-x5mmv --namespace=kubectl-3811 -p {"metadata":{"annotations":{"x":"y"}}}' May 25 22:02:39.240: INFO: stderr: "" May 25 22:02:39.240: INFO: stdout: "pod/agnhost-master-x5mmv patched\n" STEP: checking annotations May 25 22:02:39.262: INFO: Selector matched 1 pods for map[app:agnhost] May 25 22:02:39.262: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:02:39.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3811" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":170,"skipped":2668,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:02:39.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 25 22:02:43.401: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 25 22:03:03.484: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:03:03.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6681" for this suite. 
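
Graceful deletion, as exercised above, gives the kubelet up to spec.terminationGracePeriodSeconds (30s by default) to stop the containers before the API object disappears; the test observes this through the kubectl proxy it started. A hypothetical equivalent by hand (pod name assumed):

    kubectl delete pod pod-example -n pods-6681 --grace-period=30
    kubectl get pod pod-example -n pods-6681 --watch   # Terminating, then NotFound
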
• [SLOW TEST:24.227 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":171,"skipped":2678,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:03:03.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 25 22:03:03.602: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4158 /api/v1/namespaces/watch-4158/configmaps/e2e-watch-test-resource-version 02be6c3b-817d-428e-b6c2-e8d245adef4e 19129170 0 2020-05-25 22:03:03 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 25 22:03:03.602: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4158 /api/v1/namespaces/watch-4158/configmaps/e2e-watch-test-resource-version 02be6c3b-817d-428e-b6c2-e8d245adef4e 19129171 0 2020-05-25 22:03:03 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:03:03.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4158" for this suite. 
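
Only the second modification and the deletion are delivered because the watch starts at the resourceVersion returned by the first update and replays everything after it. Against the raw API this looks roughly like the following, where the starting version is an assumption (the one just before the 19129170 event seen above):

    kubectl get --raw \
      "/api/v1/namespaces/watch-4158/configmaps?watch=true&resourceVersion=19129169"
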
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":172,"skipped":2682,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:03:03.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-k6mh STEP: Creating a pod to test atomic-volume-subpath May 25 22:03:03.705: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-k6mh" in namespace "subpath-3784" to be "success or failure" May 25 22:03:03.713: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Pending", Reason="", readiness=false. Elapsed: 8.359887ms May 25 22:03:05.776: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071399176s May 25 22:03:07.781: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 4.075461308s May 25 22:03:09.785: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 6.080039288s May 25 22:03:11.790: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 8.084771772s May 25 22:03:13.794: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 10.088981269s May 25 22:03:15.799: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 12.093875374s May 25 22:03:17.803: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 14.098162992s May 25 22:03:19.842: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 16.137320469s May 25 22:03:21.846: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 18.1412889s May 25 22:03:23.850: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 20.144973713s May 25 22:03:25.854: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Running", Reason="", readiness=true. Elapsed: 22.149201265s May 25 22:03:27.884: INFO: Pod "pod-subpath-test-configmap-k6mh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.179162297s STEP: Saw pod success May 25 22:03:27.884: INFO: Pod "pod-subpath-test-configmap-k6mh" satisfied condition "success or failure" May 25 22:03:27.887: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-k6mh container test-container-subpath-configmap-k6mh: STEP: delete the pod May 25 22:03:27.906: INFO: Waiting for pod pod-subpath-test-configmap-k6mh to disappear May 25 22:03:27.910: INFO: Pod pod-subpath-test-configmap-k6mh no longer exists STEP: Deleting pod pod-subpath-test-configmap-k6mh May 25 22:03:27.911: INFO: Deleting pod "pod-subpath-test-configmap-k6mh" in namespace "subpath-3784" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:03:27.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3784" for this suite. • [SLOW TEST:24.304 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":173,"skipped":2689,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:03:27.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-995 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 22:03:27.963: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 25 22:03:50.112: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.213:8080/dial?request=hostname&protocol=http&host=10.244.1.165&port=8080&tries=1'] Namespace:pod-network-test-995 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:03:50.112: INFO: >>> kubeConfig: /root/.kube/config I0525 22:03:50.142182 6 log.go:172] (0xc0016e1600) (0xc002328000) Create stream I0525 22:03:50.142222 6 log.go:172] (0xc0016e1600) (0xc002328000) Stream added, broadcasting: 1 I0525 22:03:50.143951 6 log.go:172] (0xc0016e1600) Reply frame received for 1 I0525 22:03:50.144000 6 log.go:172] (0xc0016e1600) (0xc00289c000) Create stream I0525 22:03:50.144013 6 log.go:172] (0xc0016e1600) (0xc00289c000) Stream added, broadcasting: 3 I0525 22:03:50.144722 6 log.go:172] 
(0xc0016e1600) Reply frame received for 3 I0525 22:03:50.144748 6 log.go:172] (0xc0016e1600) (0xc0016345a0) Create stream I0525 22:03:50.144761 6 log.go:172] (0xc0016e1600) (0xc0016345a0) Stream added, broadcasting: 5 I0525 22:03:50.145752 6 log.go:172] (0xc0016e1600) Reply frame received for 5 I0525 22:03:50.248147 6 log.go:172] (0xc0016e1600) Data frame received for 3 I0525 22:03:50.248194 6 log.go:172] (0xc00289c000) (3) Data frame handling I0525 22:03:50.248218 6 log.go:172] (0xc00289c000) (3) Data frame sent I0525 22:03:50.249435 6 log.go:172] (0xc0016e1600) Data frame received for 3 I0525 22:03:50.249466 6 log.go:172] (0xc00289c000) (3) Data frame handling I0525 22:03:50.249643 6 log.go:172] (0xc0016e1600) Data frame received for 5 I0525 22:03:50.249690 6 log.go:172] (0xc0016345a0) (5) Data frame handling I0525 22:03:50.251740 6 log.go:172] (0xc0016e1600) Data frame received for 1 I0525 22:03:50.251782 6 log.go:172] (0xc002328000) (1) Data frame handling I0525 22:03:50.251823 6 log.go:172] (0xc002328000) (1) Data frame sent I0525 22:03:50.251842 6 log.go:172] (0xc0016e1600) (0xc002328000) Stream removed, broadcasting: 1 I0525 22:03:50.251932 6 log.go:172] (0xc0016e1600) (0xc002328000) Stream removed, broadcasting: 1 I0525 22:03:50.251949 6 log.go:172] (0xc0016e1600) (0xc00289c000) Stream removed, broadcasting: 3 I0525 22:03:50.252013 6 log.go:172] (0xc0016e1600) Go away received I0525 22:03:50.252129 6 log.go:172] (0xc0016e1600) (0xc0016345a0) Stream removed, broadcasting: 5 May 25 22:03:50.252: INFO: Waiting for responses: map[] May 25 22:03:50.256: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.213:8080/dial?request=hostname&protocol=http&host=10.244.2.212&port=8080&tries=1'] Namespace:pod-network-test-995 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:03:50.256: INFO: >>> kubeConfig: /root/.kube/config I0525 22:03:50.288093 6 log.go:172] (0xc0016e1ce0) (0xc0023285a0) Create stream I0525 22:03:50.288126 6 log.go:172] (0xc0016e1ce0) (0xc0023285a0) Stream added, broadcasting: 1 I0525 22:03:50.290081 6 log.go:172] (0xc0016e1ce0) Reply frame received for 1 I0525 22:03:50.290133 6 log.go:172] (0xc0016e1ce0) (0xc0023286e0) Create stream I0525 22:03:50.290149 6 log.go:172] (0xc0016e1ce0) (0xc0023286e0) Stream added, broadcasting: 3 I0525 22:03:50.291295 6 log.go:172] (0xc0016e1ce0) Reply frame received for 3 I0525 22:03:50.291334 6 log.go:172] (0xc0016e1ce0) (0xc002328780) Create stream I0525 22:03:50.291349 6 log.go:172] (0xc0016e1ce0) (0xc002328780) Stream added, broadcasting: 5 I0525 22:03:50.292289 6 log.go:172] (0xc0016e1ce0) Reply frame received for 5 I0525 22:03:50.363361 6 log.go:172] (0xc0016e1ce0) Data frame received for 3 I0525 22:03:50.363392 6 log.go:172] (0xc0023286e0) (3) Data frame handling I0525 22:03:50.363409 6 log.go:172] (0xc0023286e0) (3) Data frame sent I0525 22:03:50.363889 6 log.go:172] (0xc0016e1ce0) Data frame received for 5 I0525 22:03:50.363914 6 log.go:172] (0xc002328780) (5) Data frame handling I0525 22:03:50.363932 6 log.go:172] (0xc0016e1ce0) Data frame received for 3 I0525 22:03:50.363943 6 log.go:172] (0xc0023286e0) (3) Data frame handling I0525 22:03:50.365547 6 log.go:172] (0xc0016e1ce0) Data frame received for 1 I0525 22:03:50.365577 6 log.go:172] (0xc0023285a0) (1) Data frame handling I0525 22:03:50.365605 6 log.go:172] (0xc0023285a0) (1) Data frame sent I0525 22:03:50.365622 6 log.go:172] (0xc0016e1ce0) (0xc0023285a0) Stream removed, 
broadcasting: 1 I0525 22:03:50.365641 6 log.go:172] (0xc0016e1ce0) Go away received I0525 22:03:50.365751 6 log.go:172] (0xc0016e1ce0) (0xc0023285a0) Stream removed, broadcasting: 1 I0525 22:03:50.365779 6 log.go:172] (0xc0016e1ce0) (0xc0023286e0) Stream removed, broadcasting: 3 I0525 22:03:50.365798 6 log.go:172] (0xc0016e1ce0) (0xc002328780) Stream removed, broadcasting: 5 May 25 22:03:50.365: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:03:50.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-995" for this suite. • [SLOW TEST:22.454 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2700,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:03:50.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
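
The container just created serves the hook traffic; the pod created in the [It] block below attaches a postStart hook to its own container, and the kubelet holds the container's Running transition until the hook returns. A minimal sketch of a postStart exec hook (the real e2e pod calls out to the handler pod rather than writing a file):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-poststart-exec-hook
    spec:
      containers:
      - name: pod-with-poststart-exec-hook
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "sleep 600"]
        lifecycle:
          postStart:
            exec:
              command: ["sh", "-c", "echo poststart-ran > /tmp/poststart"]
    EOF
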
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 25 22:04:00.560: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 22:04:00.587: INFO: Pod pod-with-poststart-exec-hook still exists May 25 22:04:02.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 22:04:02.593: INFO: Pod pod-with-poststart-exec-hook still exists May 25 22:04:04.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 22:04:04.592: INFO: Pod pod-with-poststart-exec-hook still exists May 25 22:04:06.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 22:04:06.592: INFO: Pod pod-with-poststart-exec-hook still exists May 25 22:04:08.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 22:04:08.592: INFO: Pod pod-with-poststart-exec-hook still exists May 25 22:04:10.588: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 25 22:04:10.592: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:04:10.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6369" for this suite. • [SLOW TEST:20.226 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2701,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:04:10.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 22:04:15.859: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container 
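
FallbackToLogsOnError only takes effect when the container fails and wrote nothing to its terminationMessagePath; the tail of the container log then becomes the termination message, which is how the Expected: &{DONE} match above succeeds. A sketch:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-example
    spec:
      restartPolicy: Never
      containers:
      - name: term-msg
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "echo -n DONE; exit 1"]   # fails; last log output becomes the message
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
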
[AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:04:15.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8356" for this suite. • [SLOW TEST:5.332 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2703,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:04:15.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-6e60ec5a-bd45-4f51-a0ce-fe8dc989f4df STEP: Creating a pod to test consume secrets May 25 22:04:16.005: INFO: Waiting up to 5m0s for pod "pod-secrets-4cc1952e-7a5d-4f95-98a9-bc95e28163ad" in namespace "secrets-7725" to be "success or failure" May 25 22:04:16.008: INFO: Pod "pod-secrets-4cc1952e-7a5d-4f95-98a9-bc95e28163ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.186727ms May 25 22:04:18.185: INFO: Pod "pod-secrets-4cc1952e-7a5d-4f95-98a9-bc95e28163ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180391942s May 25 22:04:20.188: INFO: Pod "pod-secrets-4cc1952e-7a5d-4f95-98a9-bc95e28163ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.183875551s STEP: Saw pod success May 25 22:04:20.189: INFO: Pod "pod-secrets-4cc1952e-7a5d-4f95-98a9-bc95e28163ad" satisfied condition "success or failure" May 25 22:04:20.191: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-4cc1952e-7a5d-4f95-98a9-bc95e28163ad container secret-volume-test: STEP: delete the pod May 25 22:04:20.224: INFO: Waiting for pod pod-secrets-4cc1952e-7a5d-4f95-98a9-bc95e28163ad to disappear May 25 22:04:20.241: INFO: Pod pod-secrets-4cc1952e-7a5d-4f95-98a9-bc95e28163ad no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:04:20.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7725" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2713,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:04:20.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 22:04:20.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-4192' May 25 22:04:20.650: INFO: stderr: "" May 25 22:04:20.650: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 25 22:04:25.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-4192 -o json' May 25 22:04:25.796: INFO: stderr: "" May 25 22:04:25.796: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-25T22:04:20Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-4192\",\n \"resourceVersion\": \"19129602\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-4192/pods/e2e-test-httpd-pod\",\n \"uid\": \"e75153e9-8117-434d-8460-74eeb0638a87\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n 
\"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-g8tls\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-g8tls\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-g8tls\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-25T22:04:20Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-25T22:04:23Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-25T22:04:23Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-25T22:04:20Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d20f3160e5d537a3e9c06543a84098e592eabe41cfab76a3c4b519f7c8e26fa6\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-25T22:04:23Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.216\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.216\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-25T22:04:20Z\"\n }\n}\n" STEP: replace the image in the pod May 25 22:04:25.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-4192' May 25 22:04:26.091: INFO: stderr: "" May 25 22:04:26.091: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 25 22:04:26.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-4192' May 25 22:04:39.499: INFO: stderr: "" May 25 22:04:39.499: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:04:39.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4192" for this suite. 
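
The test pipes the pod JSON back through kubectl replace with only the image swapped; spec.containers[*].image is one of the few pod fields that can be mutated in place, so the replace succeeds without deleting the pod. By hand, roughly (the sed substitution stands in for the test's in-memory edit):

    kubectl get pod e2e-test-httpd-pod -n kubectl-4192 -o json \
      | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
      | kubectl replace -f -
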
• [SLOW TEST:19.262 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":178,"skipped":2775,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:04:39.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 22:04:39.973: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 22:04:42.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041080, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041080, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041080, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041079, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 22:04:44.009: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041080, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041080, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041080, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041079, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 22:04:47.046: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:04:47.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-934" for this suite. STEP: Destroying namespace "webhook-934-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.739 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":179,"skipped":2784,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:04:47.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:04:47.285: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 25 22:04:49.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7950 create -f -' May 25 22:04:55.237: INFO: stderr: "" May 25 22:04:55.237: INFO: stdout: "e2e-test-crd-publish-openapi-305-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 25 22:04:55.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7950 delete e2e-test-crd-publish-openapi-305-crds test-foo' May 25 22:04:55.342: INFO: stderr: "" May 25 22:04:55.342: INFO: stdout: 
"e2e-test-crd-publish-openapi-305-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 25 22:04:55.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7950 apply -f -' May 25 22:04:56.824: INFO: stderr: "" May 25 22:04:56.824: INFO: stdout: "e2e-test-crd-publish-openapi-305-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 25 22:04:56.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7950 delete e2e-test-crd-publish-openapi-305-crds test-foo' May 25 22:04:56.930: INFO: stderr: "" May 25 22:04:56.930: INFO: stdout: "e2e-test-crd-publish-openapi-305-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 25 22:04:56.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7950 create -f -' May 25 22:04:57.694: INFO: rc: 1 May 25 22:04:57.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7950 apply -f -' May 25 22:04:57.953: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 25 22:04:57.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7950 create -f -' May 25 22:04:58.188: INFO: rc: 1 May 25 22:04:58.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7950 apply -f -' May 25 22:04:58.447: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 25 22:04:58.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-305-crds' May 25 22:04:59.200: INFO: stderr: "" May 25 22:04:59.200: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-305-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 25 22:04:59.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-305-crds.metadata' May 25 22:05:00.359: INFO: stderr: "" May 25 22:05:00.359: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-305-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. 
If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 25 22:05:00.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-305-crds.spec' May 25 22:05:01.064: INFO: stderr: "" May 25 22:05:01.064: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-305-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 25 22:05:01.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-305-crds.spec.bars' May 25 22:05:02.196: INFO: stderr: "" May 25 22:05:02.196: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-305-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 25 22:05:02.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-305-crds.spec.bars2' May 25 22:05:02.464: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:05.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7950" for this suite. 
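
The CRD itself is never printed, but the kubectl explain output above pins down its schema: spec.bars is an array of objects with a required name plus optional age and bazs. A minimal reconstruction under those constraints (group and names copied from the generated CR seen above; everything else is an assumption):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: e2e-test-crd-publish-openapi-305-crds.crd-publish-openapi-test-foo.example.com
    spec:
      group: crd-publish-openapi-test-foo.example.com
      scope: Namespaced
      names:
        plural: e2e-test-crd-publish-openapi-305-crds
        singular: e2e-test-crd-publish-openapi-305-crd
        kind: E2e-test-crd-publish-openapi-305-crd
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            description: Foo CRD for Testing
            properties:
              spec:
                type: object
                description: Specification of Foo
                properties:
                  bars:
                    type: array
                    description: List of Bars and their specs.
                    items:
                      type: object
                      required: ["name"]
                      properties:
                        name:
                          type: string
                          description: Name of Bar.
                        age:
                          type: string
                          description: Age of Bar.
                        bazs:
                          type: array
                          description: List of Bazs.
                          items:
                            type: string
              status:
                type: object
                description: Status of Foo
    EOF

Once the schema is published in OpenAPI, kubectl explain resolves each level exactly as shown, and create/apply validate objects client-side against the known and required properties.
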
• [SLOW TEST:18.097 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":180,"skipped":2793,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:05.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 22:05:05.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7002' May 25 22:05:05.587: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 25 22:05:05.587: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 25 22:05:05.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7002' May 25 22:05:05.741: INFO: stderr: "" May 25 22:05:05.742: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:05.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7002" for this suite. 
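------------------------------
The deprecation warning above names the replacements: 'kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7002' for workloads, or 'kubectl run --generator=run-pod/v1' for bare pods. In API terms, what the deprecated generator produced is roughly the Deployment below. A sketch under the same v0.18+ client-go assumption as earlier; the "run" label and container name are chosen for illustration.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	labels := map[string]string{"run": "e2e-test-httpd-deployment"}
	deploy := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			// apps/v1 requires an explicit selector matching the template labels;
			// replicas defaults to 1 server-side, as the generator produced.
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	if _, err := client.AppsV1().Deployments("kubectl-7002").Create(
		context.TODO(), deploy, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------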
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":181,"skipped":2807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:05.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 22:05:05.929: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9debdf59-c859-4189-a9cf-f68e4846e325" in namespace "projected-2462" to be "success or failure" May 25 22:05:05.932: INFO: Pod "downwardapi-volume-9debdf59-c859-4189-a9cf-f68e4846e325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.727463ms May 25 22:05:07.937: INFO: Pod "downwardapi-volume-9debdf59-c859-4189-a9cf-f68e4846e325": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008507795s May 25 22:05:09.942: INFO: Pod "downwardapi-volume-9debdf59-c859-4189-a9cf-f68e4846e325": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012942283s STEP: Saw pod success May 25 22:05:09.942: INFO: Pod "downwardapi-volume-9debdf59-c859-4189-a9cf-f68e4846e325" satisfied condition "success or failure" May 25 22:05:09.945: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9debdf59-c859-4189-a9cf-f68e4846e325 container client-container: STEP: delete the pod May 25 22:05:09.963: INFO: Waiting for pod downwardapi-volume-9debdf59-c859-4189-a9cf-f68e4846e325 to disappear May 25 22:05:10.014: INFO: Pod downwardapi-volume-9debdf59-c859-4189-a9cf-f68e4846e325 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:10.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2462" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2854,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:10.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 22:05:10.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1ed72a3-79ea-46c7-943a-3a2e7b81e4b4" in namespace "projected-6833" to be "success or failure" May 25 22:05:10.099: INFO: Pod "downwardapi-volume-a1ed72a3-79ea-46c7-943a-3a2e7b81e4b4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.278394ms May 25 22:05:12.104: INFO: Pod "downwardapi-volume-a1ed72a3-79ea-46c7-943a-3a2e7b81e4b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008025215s May 25 22:05:14.108: INFO: Pod "downwardapi-volume-a1ed72a3-79ea-46c7-943a-3a2e7b81e4b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012209527s STEP: Saw pod success May 25 22:05:14.108: INFO: Pod "downwardapi-volume-a1ed72a3-79ea-46c7-943a-3a2e7b81e4b4" satisfied condition "success or failure" May 25 22:05:14.111: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a1ed72a3-79ea-46c7-943a-3a2e7b81e4b4 container client-container: STEP: delete the pod May 25 22:05:14.131: INFO: Waiting for pod downwardapi-volume-a1ed72a3-79ea-46c7-943a-3a2e7b81e4b4 to disappear May 25 22:05:14.135: INFO: Pod downwardapi-volume-a1ed72a3-79ea-46c7-943a-3a2e7b81e4b4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:14.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6833" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2859,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:14.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:14.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-202" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":184,"skipped":2872,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:14.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 25 22:05:14.388: INFO: Waiting up to 5m0s for pod "pod-268b5042-a293-467d-8106-669c0f77ae62" in namespace "emptydir-26" to be "success or failure" May 25 22:05:14.425: INFO: Pod "pod-268b5042-a293-467d-8106-669c0f77ae62": Phase="Pending", Reason="", readiness=false. Elapsed: 36.186121ms May 25 22:05:16.429: INFO: Pod "pod-268b5042-a293-467d-8106-669c0f77ae62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040574072s May 25 22:05:18.433: INFO: Pod "pod-268b5042-a293-467d-8106-669c0f77ae62": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044453602s STEP: Saw pod success May 25 22:05:18.433: INFO: Pod "pod-268b5042-a293-467d-8106-669c0f77ae62" satisfied condition "success or failure" May 25 22:05:18.436: INFO: Trying to get logs from node jerma-worker pod pod-268b5042-a293-467d-8106-669c0f77ae62 container test-container: STEP: delete the pod May 25 22:05:18.508: INFO: Waiting for pod pod-268b5042-a293-467d-8106-669c0f77ae62 to disappear May 25 22:05:18.635: INFO: Pod pod-268b5042-a293-467d-8106-669c0f77ae62 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:18.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-26" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:18.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 25 22:05:18.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5044' May 25 22:05:19.197: INFO: stderr: "" May 25 22:05:19.197: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 22:05:19.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5044' May 25 22:05:19.344: INFO: stderr: "" May 25 22:05:19.344: INFO: stdout: "update-demo-nautilus-czc5k update-demo-nautilus-h59hv " May 25 22:05:19.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czc5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5044' May 25 22:05:19.446: INFO: stderr: "" May 25 22:05:19.446: INFO: stdout: "" May 25 22:05:19.446: INFO: update-demo-nautilus-czc5k is created but not running May 25 22:05:24.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5044' May 25 22:05:24.553: INFO: stderr: "" May 25 22:05:24.553: INFO: stdout: "update-demo-nautilus-czc5k update-demo-nautilus-h59hv " May 25 22:05:24.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czc5k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5044' May 25 22:05:24.652: INFO: stderr: "" May 25 22:05:24.652: INFO: stdout: "true" May 25 22:05:24.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-czc5k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5044' May 25 22:05:24.747: INFO: stderr: "" May 25 22:05:24.747: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 22:05:24.747: INFO: validating pod update-demo-nautilus-czc5k May 25 22:05:24.752: INFO: got data: { "image": "nautilus.jpg" } May 25 22:05:24.752: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 22:05:24.752: INFO: update-demo-nautilus-czc5k is verified up and running May 25 22:05:24.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h59hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5044' May 25 22:05:24.851: INFO: stderr: "" May 25 22:05:24.851: INFO: stdout: "true" May 25 22:05:24.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h59hv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5044' May 25 22:05:24.942: INFO: stderr: "" May 25 22:05:24.942: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 22:05:24.942: INFO: validating pod update-demo-nautilus-h59hv May 25 22:05:24.946: INFO: got data: { "image": "nautilus.jpg" } May 25 22:05:24.946: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 22:05:24.946: INFO: update-demo-nautilus-h59hv is verified up and running STEP: using delete to clean up resources May 25 22:05:24.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5044' May 25 22:05:25.053: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 25 22:05:25.053: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 25 22:05:25.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5044' May 25 22:05:25.146: INFO: stderr: "No resources found in kubectl-5044 namespace.\n" May 25 22:05:25.146: INFO: stdout: "" May 25 22:05:25.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5044 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 22:05:25.245: INFO: stderr: "" May 25 22:05:25.245: INFO: stdout: "update-demo-nautilus-czc5k\nupdate-demo-nautilus-h59hv\n" May 25 22:05:25.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5044' May 25 22:05:25.858: INFO: stderr: "No resources found in kubectl-5044 namespace.\n" May 25 22:05:25.858: INFO: stdout: "" May 25 22:05:25.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5044 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 22:05:25.952: INFO: stderr: "" May 25 22:05:25.952: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:25.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5044" for this suite. • [SLOW TEST:7.307 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":186,"skipped":2910,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:25.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 25 22:05:26.277: INFO: >>> kubeConfig: /root/.kube/config May 25 22:05:28.276: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:38.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2976" for this suite. • [SLOW TEST:12.818 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":187,"skipped":2924,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:38.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:43.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6710" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":188,"skipped":2938,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:43.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 25 22:05:43.769: INFO: Waiting up to 5m0s for pod "downward-api-389d6f3a-eb45-4b46-9598-cd30f37e935f" in namespace "downward-api-9955" to be "success or failure" May 25 22:05:43.777: INFO: Pod "downward-api-389d6f3a-eb45-4b46-9598-cd30f37e935f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.037418ms May 25 22:05:45.783: INFO: Pod "downward-api-389d6f3a-eb45-4b46-9598-cd30f37e935f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013194704s May 25 22:05:47.787: INFO: Pod "downward-api-389d6f3a-eb45-4b46-9598-cd30f37e935f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017424581s STEP: Saw pod success May 25 22:05:47.787: INFO: Pod "downward-api-389d6f3a-eb45-4b46-9598-cd30f37e935f" satisfied condition "success or failure" May 25 22:05:47.790: INFO: Trying to get logs from node jerma-worker pod downward-api-389d6f3a-eb45-4b46-9598-cd30f37e935f container dapi-container: STEP: delete the pod May 25 22:05:47.808: INFO: Waiting for pod downward-api-389d6f3a-eb45-4b46-9598-cd30f37e935f to disappear May 25 22:05:47.837: INFO: Pod downward-api-389d6f3a-eb45-4b46-9598-cd30f37e935f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:47.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9955" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":2941,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:47.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-e623ad35-69bb-4080-b888-8831a95dbdbf STEP: Creating a pod to test consume configMaps May 25 22:05:47.940: INFO: Waiting up to 5m0s for pod "pod-configmaps-dfe16dcd-a885-4a51-8b47-c712b0a4314e" in namespace "configmap-8628" to be "success or failure" May 25 22:05:47.967: INFO: Pod "pod-configmaps-dfe16dcd-a885-4a51-8b47-c712b0a4314e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.573802ms May 25 22:05:49.971: INFO: Pod "pod-configmaps-dfe16dcd-a885-4a51-8b47-c712b0a4314e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031485347s May 25 22:05:51.975: INFO: Pod "pod-configmaps-dfe16dcd-a885-4a51-8b47-c712b0a4314e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035669801s STEP: Saw pod success May 25 22:05:51.975: INFO: Pod "pod-configmaps-dfe16dcd-a885-4a51-8b47-c712b0a4314e" satisfied condition "success or failure" May 25 22:05:51.979: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-dfe16dcd-a885-4a51-8b47-c712b0a4314e container configmap-volume-test: STEP: delete the pod May 25 22:05:52.038: INFO: Waiting for pod pod-configmaps-dfe16dcd-a885-4a51-8b47-c712b0a4314e to disappear May 25 22:05:52.047: INFO: Pod pod-configmaps-dfe16dcd-a885-4a51-8b47-c712b0a4314e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:52.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8628" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2962,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:52.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5242.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5242.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5242.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5242.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5242.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5242.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 22:05:58.240: INFO: DNS probes using dns-5242/dns-test-653a3f0c-a77b-40f8-8991-b413e5aa7126 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:05:58.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5242" for this suite. • [SLOW TEST:6.395 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":191,"skipped":2984,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:05:58.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:06:15.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8909" for this suite. • [SLOW TEST:16.560 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":192,"skipped":3004,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:06:15.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:06:26.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5811" for this suite. • [SLOW TEST:11.136 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":193,"skipped":3010,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:06:26.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 22:06:26.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c12c536d-6466-4ba5-8147-b230e1c3cafc" in namespace "downward-api-2582" to be "success or failure" May 25 22:06:26.257: INFO: Pod "downwardapi-volume-c12c536d-6466-4ba5-8147-b230e1c3cafc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248058ms May 25 22:06:28.366: INFO: Pod "downwardapi-volume-c12c536d-6466-4ba5-8147-b230e1c3cafc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112635651s May 25 22:06:30.370: INFO: Pod "downwardapi-volume-c12c536d-6466-4ba5-8147-b230e1c3cafc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.116262123s STEP: Saw pod success May 25 22:06:30.370: INFO: Pod "downwardapi-volume-c12c536d-6466-4ba5-8147-b230e1c3cafc" satisfied condition "success or failure" May 25 22:06:30.372: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c12c536d-6466-4ba5-8147-b230e1c3cafc container client-container: STEP: delete the pod May 25 22:06:30.503: INFO: Waiting for pod downwardapi-volume-c12c536d-6466-4ba5-8147-b230e1c3cafc to disappear May 25 22:06:30.515: INFO: Pod downwardapi-volume-c12c536d-6466-4ba5-8147-b230e1c3cafc no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:06:30.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2582" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3016,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:06:30.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:06:30.700: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 25 22:06:35.719: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 25 22:06:35.720: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 25 22:06:37.724: INFO: Creating deployment "test-rollover-deployment" May 25 22:06:37.731: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 25 22:06:39.738: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 25 22:06:39.745: INFO: Ensure that both replica sets have 1 created replica May 25 22:06:39.750: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 25 22:06:39.755: INFO: Updating deployment test-rollover-deployment May 25 22:06:39.755: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 25 22:06:41.923: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 25 22:06:41.929: INFO: Make sure deployment "test-rollover-deployment" is complete May 25 22:06:41.935: INFO: all replica sets need to contain the pod-template-hash label May 25 22:06:41.935: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041199, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 22:06:43.943: INFO: all replica sets need to contain the pod-template-hash label May 25 22:06:43.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 22:06:45.944: INFO: all replica sets need to contain the pod-template-hash label May 25 22:06:45.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 22:06:47.943: INFO: all replica sets need to contain the pod-template-hash label May 25 22:06:47.943: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 22:06:49.944: INFO: all replica sets need to contain the pod-template-hash label May 25 22:06:49.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 22:06:51.942: INFO: all replica sets need to contain the pod-template-hash label May 25 22:06:51.942: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041203, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041197, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 25 22:06:54.052: INFO: May 25 22:06:54.052: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 25 22:06:54.108: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-5841 /apis/apps/v1/namespaces/deployment-5841/deployments/test-rollover-deployment 77a97515-b9f8-4a94-bf16-4d7cae169425 19130700 2 2020-05-25 22:06:37 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004341618 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-25 22:06:37 +0000 UTC,LastTransitionTime:2020-05-25 22:06:37 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-25 22:06:53 +0000 UTC,LastTransitionTime:2020-05-25 22:06:37 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 25 22:06:54.110: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-5841 /apis/apps/v1/namespaces/deployment-5841/replicasets/test-rollover-deployment-574d6dfbff 0ba6a10f-f016-4611-b35f-95911300be3c 19130690 2 2020-05-25 22:06:39 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 77a97515-b9f8-4a94-bf16-4d7cae169425 0xc004341b47 0xc004341b48}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004341bd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 25 22:06:54.110: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 25 22:06:54.110: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-5841 /apis/apps/v1/namespaces/deployment-5841/replicasets/test-rollover-controller de7cd8ed-1610-43ae-a910-be81d8bf6113 19130699 2 2020-05-25 22:06:30 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 77a97515-b9f8-4a94-bf16-4d7cae169425 0xc004341a57 0xc004341a58}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004341ac8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 22:06:54.111: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-5841 /apis/apps/v1/namespaces/deployment-5841/replicasets/test-rollover-deployment-f6c94f66c ec9e450b-34aa-4c27-823c-68b36e6ad13b 19130642 2 2020-05-25 22:06:37 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 77a97515-b9f8-4a94-bf16-4d7cae169425 0xc004341c50 0xc004341c51}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004341cc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 25 22:06:54.114: INFO: Pod "test-rollover-deployment-574d6dfbff-bxnjx" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-bxnjx test-rollover-deployment-574d6dfbff- deployment-5841 /api/v1/namespaces/deployment-5841/pods/test-rollover-deployment-574d6dfbff-bxnjx 0dd6fad9-7cff-451d-93c1-4b2278bfff47 19130658 0 2020-05-25 22:06:39 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 0ba6a10f-f016-4611-b35f-95911300be3c 0xc004780327 0xc004780328}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-l96l9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-l96l9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-l96l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 22:06:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 22:06:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 22:06:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-25 22:06:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.175,StartTime:2020-05-25 22:06:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-25 22:06:42 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://4bcc696302a65251cd97a068b8e664cf89034242d757c62b6732db3016f6b756,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.175,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:06:54.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5841" for this suite. • [SLOW TEST:23.598 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":195,"skipped":3024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:06:54.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-266.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-266.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-266.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-266.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-266.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-266.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 22:07:00.282: INFO: DNS probes using dns-266/dns-test-16224c2f-f2b8-491a-b534-83e31f9d99eb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:07:00.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-266" for this suite. • [SLOW TEST:6.256 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":196,"skipped":3112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:07:00.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:08:00.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5911" for this suite. 
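The container-probe test above boils down to a pod whose readiness probe always fails: the pod reaches Running but is never reported Ready, and its restart count stays at 0, because readiness failures (unlike liveness failures) never restart a container. A minimal sketch in Go against k8s.io/api (v1.17-era field names to match this suite; the pod name and sleep command are illustrative, not taken from the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
				// Always-failing readiness probe: the kubelet marks the
				// container (and hence the pod) NotReady on every check,
				// but never restarts it -- restarting is a liveness action.
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed ProbeHandler in k8s.io/api >= v0.23
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // manifest you could feed to `kubectl apply -f -`
}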
• [SLOW TEST:60.405 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3146,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:08:00.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 25 22:08:00.832: INFO: PodSpec: initContainers in spec.initContainers May 25 22:08:45.707: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-00d23b0c-6380-40c4-8c21-36618737ad9b", GenerateName:"", Namespace:"init-container-9948", SelfLink:"/api/v1/namespaces/init-container-9948/pods/pod-init-00d23b0c-6380-40c4-8c21-36618737ad9b", UID:"b297a15a-e959-4864-a8d6-6d57ca66449b", ResourceVersion:"19131160", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726041280, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"832602812"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-qdtcj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0056df8c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qdtcj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qdtcj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-qdtcj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004340038), 
ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc005a610e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0043400c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0043400e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0043400e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0043400ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041281, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041281, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041281, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041280, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.222", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.222"}}, StartTime:(*v1.Time)(0xc000f6a600), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000f6a9e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001da9960)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://7d91a93e48ef6aaea6566dbd90181eba332948e2e758aad208f7847f25dd0016", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000f6aa80), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000f6a8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00434016f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:08:45.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9948" for this suite. • [SLOW TEST:44.959 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":198,"skipped":3159,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:08:45.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-fbfp STEP: Creating a pod to test atomic-volume-subpath May 25 22:08:46.034: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-fbfp" in namespace "subpath-311" to be "success or failure" May 25 22:08:46.038: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.658118ms May 25 22:08:48.045: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011018686s May 25 22:08:50.050: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 4.015694096s May 25 22:08:52.053: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.019141429s May 25 22:08:54.057: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 8.023314817s May 25 22:08:56.060: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 10.02617029s May 25 22:08:58.064: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 12.03033005s May 25 22:09:00.069: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 14.035375527s May 25 22:09:02.074: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 16.03967804s May 25 22:09:04.078: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 18.044085011s May 25 22:09:06.099: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 20.064843638s May 25 22:09:08.103: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Running", Reason="", readiness=true. Elapsed: 22.06904838s May 25 22:09:10.190: INFO: Pod "pod-subpath-test-projected-fbfp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.155382498s STEP: Saw pod success May 25 22:09:10.190: INFO: Pod "pod-subpath-test-projected-fbfp" satisfied condition "success or failure" May 25 22:09:10.193: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-fbfp container test-container-subpath-projected-fbfp: STEP: delete the pod May 25 22:09:10.251: INFO: Waiting for pod pod-subpath-test-projected-fbfp to disappear May 25 22:09:10.261: INFO: Pod pod-subpath-test-projected-fbfp no longer exists STEP: Deleting pod pod-subpath-test-projected-fbfp May 25 22:09:10.261: INFO: Deleting pod "pod-subpath-test-projected-fbfp" in namespace "subpath-311" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:09:10.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-311" for this suite. 
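The atomic-writer subpath test above mounts a single entry of a projected volume via subPath rather than the whole directory. A sketch of that wiring, assuming a ConfigMap named my-config with a key some-key exists (both names, and the pod name, are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-vol",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"cat", "/mnt/file"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-vol",
					MountPath: "/mnt/file",
					// SubPath mounts one entry of the volume instead of the
					// whole directory; the atomic-writer test verifies the
					// entry stays consistent while the volume is rewritten.
					SubPath: "some-key",
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}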
• [SLOW TEST:24.527 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":199,"skipped":3166,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:09:10.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 25 22:09:15.021: INFO: Successfully updated pod "annotationupdate1cb90d54-98a7-4e88-a1a7-bdd4d3ac9375" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:09:19.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2897" for this suite. 
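The downward-API test that follows works because annotation values are projected into a volume file that the kubelet's atomic writer rewrites whenever the pod's metadata changes, so the container can watch a file instead of querying the API. A sketch of the volume wiring (illustrative names; v1.17-era fields):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate",
			Annotations: map[string]string{"build": "one"}, // later updated in place
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "watcher",
				Image: "docker.io/library/busybox:1.29",
				// Re-prints the projected file; updating the pod's annotations
				// updates /etc/podinfo/annotations without a restart.
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 2; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}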
• [SLOW TEST:8.839 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3173,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:09:19.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 25 22:09:19.196: INFO: Waiting up to 5m0s for pod "pod-621b1fd8-b7ca-4086-ab68-abee3b966c0a" in namespace "emptydir-8481" to be "success or failure" May 25 22:09:19.202: INFO: Pod "pod-621b1fd8-b7ca-4086-ab68-abee3b966c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.412718ms May 25 22:09:21.206: INFO: Pod "pod-621b1fd8-b7ca-4086-ab68-abee3b966c0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009946671s May 25 22:09:23.211: INFO: Pod "pod-621b1fd8-b7ca-4086-ab68-abee3b966c0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014862499s STEP: Saw pod success May 25 22:09:23.211: INFO: Pod "pod-621b1fd8-b7ca-4086-ab68-abee3b966c0a" satisfied condition "success or failure" May 25 22:09:23.248: INFO: Trying to get logs from node jerma-worker2 pod pod-621b1fd8-b7ca-4086-ab68-abee3b966c0a container test-container: STEP: delete the pod May 25 22:09:23.273: INFO: Waiting for pod pod-621b1fd8-b7ca-4086-ab68-abee3b966c0a to disappear May 25 22:09:23.296: INFO: Pod pod-621b1fd8-b7ca-4086-ab68-abee3b966c0a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:09:23.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8481" for this suite. 
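The (non-root,0644,default) emptyDir case that follows amounts to: run as a non-root UID, write a file with mode 0644 into an emptyDir on the node's default medium, and assert on the container's output. A sketch under those assumptions (the UID, pod name, and shell command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID; this is the "non-root" variant
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// An empty Medium means the node's default backing store (disk).
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "writer",
				Image: "docker.io/library/busybox:1.29",
				// Write a 0644 file and list it, mirroring what the
				// conformance test asserts from the container's logs.
				Command:      []string{"sh", "-c", "echo hi > /scratch/f && chmod 0644 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}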
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3178,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:09:23.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-4c99e7ad-e596-4ed7-a9da-5ae7a057f2b5 STEP: Creating a pod to test consume secrets May 25 22:09:23.382: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-eb114073-5431-4961-8344-087a1fe0eff9" in namespace "projected-2492" to be "success or failure" May 25 22:09:23.408: INFO: Pod "pod-projected-secrets-eb114073-5431-4961-8344-087a1fe0eff9": Phase="Pending", Reason="", readiness=false. Elapsed: 26.302132ms May 25 22:09:25.429: INFO: Pod "pod-projected-secrets-eb114073-5431-4961-8344-087a1fe0eff9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047181143s May 25 22:09:27.434: INFO: Pod "pod-projected-secrets-eb114073-5431-4961-8344-087a1fe0eff9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051429977s STEP: Saw pod success May 25 22:09:27.434: INFO: Pod "pod-projected-secrets-eb114073-5431-4961-8344-087a1fe0eff9" satisfied condition "success or failure" May 25 22:09:27.436: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-eb114073-5431-4961-8344-087a1fe0eff9 container secret-volume-test: STEP: delete the pod May 25 22:09:27.468: INFO: Waiting for pod pod-projected-secrets-eb114073-5431-4961-8344-087a1fe0eff9 to disappear May 25 22:09:27.482: INFO: Pod pod-projected-secrets-eb114073-5431-4961-8344-087a1fe0eff9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:09:27.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2492" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3181,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:09:27.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:09:27.580: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:09:33.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-648" for this suite. • [SLOW TEST:5.620 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":203,"skipped":3188,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:09:33.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:09:33.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 25 22:09:33.320: INFO: stderr: "" May 25 22:09:33.320: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", 
BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:09:33.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4266" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":204,"skipped":3202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:09:33.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 25 22:09:33.419: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:33.452: INFO: Number of nodes with available pods: 0 May 25 22:09:33.452: INFO: Node jerma-worker is running more than one daemon pod May 25 22:09:34.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:34.459: INFO: Number of nodes with available pods: 0 May 25 22:09:34.459: INFO: Node jerma-worker is running more than one daemon pod May 25 22:09:35.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:35.459: INFO: Number of nodes with available pods: 0 May 25 22:09:35.459: INFO: Node jerma-worker is running more than one daemon pod May 25 22:09:36.544: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:36.548: INFO: Number of nodes with available pods: 0 May 25 22:09:36.548: INFO: Node jerma-worker is running more than one daemon pod May 25 22:09:37.457: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:37.461: INFO: Number of nodes with available pods: 1 May 25 22:09:37.461: INFO: Node jerma-worker2 is running more than one daemon pod May 25 22:09:38.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:38.459: INFO: Number of nodes with available pods: 2 May 25 22:09:38.459: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 25 22:09:38.508: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:38.512: INFO: Number of nodes with available pods: 1 May 25 22:09:38.512: INFO: Node jerma-worker2 is running more than one daemon pod May 25 22:09:39.519: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:39.522: INFO: Number of nodes with available pods: 1 May 25 22:09:39.522: INFO: Node jerma-worker2 is running more than one daemon pod May 25 22:09:40.517: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:40.521: INFO: Number of nodes with available pods: 1 May 25 22:09:40.521: INFO: Node jerma-worker2 is running more than one daemon pod May 25 22:09:41.518: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:41.521: INFO: Number of nodes with available pods: 1 May 25 22:09:41.521: INFO: Node jerma-worker2 is running more than one daemon pod May 25 22:09:42.518: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 25 22:09:42.521: INFO: Number of nodes with available pods: 2 May 25 22:09:42.521: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3488, will wait for the garbage collector to delete the pods May 25 22:09:42.585: INFO: Deleting DaemonSet.extensions daemon-set took: 6.706717ms May 25 22:09:42.886: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.243335ms May 25 22:09:49.290: INFO: Number of nodes with available pods: 0 May 25 22:09:49.290: INFO: Number of running nodes: 0, number of available pods: 0 May 25 22:09:49.292: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3488/daemonsets","resourceVersion":"19131594"},"items":null} May 25 22:09:49.295: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3488/pods","resourceVersion":"19131594"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:09:49.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3488" for this suite. 
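The DaemonSet controller schedules one pod per schedulable node and replaces any pod whose phase is forced to Failed, which is exactly what this test exercises. A minimal sketch of such a DaemonSet (illustrative label key; the httpd image matches the one used elsewhere in this suite):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No toleration for node-role.kubernetes.io/master, so the
					// tainted control-plane node is skipped -- which is why the
					// log above says "can't tolerate node jerma-control-plane".
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}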
• [SLOW TEST:15.997 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":205,"skipped":3235,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:09:49.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 25 22:09:49.375: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:09:54.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-606" for this suite. 
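The RestartNever init-container test that follows is the counterpart of the RestartAlways case logged earlier: with RestartPolicy Never, a failed init container is not retried, so the pod goes straight to phase Failed and the app container never starts. A sketch mirroring the images and container names from the log (the pod name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			// Never means no retry of init1; contrast the RestartAlways pod
			// earlier in this log, where init1 was restarted with backoff
			// (RestartCount:3) while the pod stayed Pending.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}}, // exits non-zero
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},  // never runs
			},
			// The app container never starts because init1 never succeeds.
			Containers: []corev1.Container{{Name: "run1", Image: "k8s.gcr.io/pause:3.1"}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}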
• [SLOW TEST:5.545 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":206,"skipped":3238,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:09:54.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 22:09:55.618: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 22:09:57.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041395, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041395, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041395, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041395, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 22:10:00.666: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:01.219: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1847" for this suite. STEP: Destroying namespace "webhook-1847-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.468 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":207,"skipped":3259,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:01.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:10:01.476: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 14.45361ms)
May 25 22:10:01.479: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.398608ms)
May 25 22:10:01.483: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.625698ms)
May 25 22:10:01.486: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.885474ms)
May 25 22:10:01.489: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.456352ms)
May 25 22:10:01.493: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.60089ms)
May 25 22:10:01.496: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.954089ms)
May 25 22:10:01.499: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.88855ms)
May 25 22:10:01.502: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.753811ms)
May 25 22:10:01.505: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.130437ms)
May 25 22:10:01.508: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.12172ms)
May 25 22:10:01.512: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.820972ms)
May 25 22:10:01.515: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.191646ms)
May 25 22:10:01.519: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.575489ms)
May 25 22:10:01.525: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 6.198723ms)
May 25 22:10:01.530: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.622535ms)
May 25 22:10:01.532: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.474203ms)
May 25 22:10:01.535: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.37743ms)
May 25 22:10:01.537: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.864958ms)
May 25 22:10:01.540: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 2.416029ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:01.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5714" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":208,"skipped":3281,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:01.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 25 22:10:06.471: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3863 pod-service-account-9314f5ca-54b2-4f75-9382-773da569f1f8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 25 22:10:06.727: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3863 pod-service-account-9314f5ca-54b2-4f75-9382-773da569f1f8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 25 22:10:06.978: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3863 pod-service-account-9314f5ca-54b2-4f75-9382-773da569f1f8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:07.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3863" for this suite. 
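The three kubectl exec calls above read the credentials that the kubelet projects into every pod that automounts its service-account token. The same check can be done from inside a pod with plain file reads; a sketch (standard library only; os.ReadFile needs Go >= 1.16):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Fixed path at which the kubelet mounts the pod's service-account
// credentials; the kubectl exec calls in the log read these same files.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}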
• [SLOW TEST:5.528 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":209,"skipped":3298,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:07.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 22:10:08.309: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 22:10:10.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041408, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041408, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041408, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041408, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 22:10:13.382: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:13.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7454" for this suite. STEP: Destroying namespace "webhook-7454-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.386 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":210,"skipped":3299,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:13.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 25 22:10:13.673: INFO: Waiting up to 5m0s for pod "pod-5338f282-502c-4d75-a7ad-056edaee94f3" in namespace "emptydir-8310" to be "success or failure" May 25 22:10:13.677: INFO: Pod "pod-5338f282-502c-4d75-a7ad-056edaee94f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.69929ms May 25 22:10:15.741: INFO: Pod "pod-5338f282-502c-4d75-a7ad-056edaee94f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067521985s May 25 22:10:17.745: INFO: Pod "pod-5338f282-502c-4d75-a7ad-056edaee94f3": Phase="Running", Reason="", readiness=true. Elapsed: 4.071716789s May 25 22:10:19.749: INFO: Pod "pod-5338f282-502c-4d75-a7ad-056edaee94f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.076153977s STEP: Saw pod success May 25 22:10:19.749: INFO: Pod "pod-5338f282-502c-4d75-a7ad-056edaee94f3" satisfied condition "success or failure" May 25 22:10:19.753: INFO: Trying to get logs from node jerma-worker2 pod pod-5338f282-502c-4d75-a7ad-056edaee94f3 container test-container: STEP: delete the pod May 25 22:10:19.779: INFO: Waiting for pod pod-5338f282-502c-4d75-a7ad-056edaee94f3 to disappear May 25 22:10:19.783: INFO: Pod pod-5338f282-502c-4d75-a7ad-056edaee94f3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:19.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8310" for this suite. 
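The (root,0644,tmpfs) emptyDir variant above differs from the default-medium case sketched earlier in only one spec field. A fragment showing just that difference (volume name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Medium "Memory" backs the emptyDir with tmpfs, so its contents live
	// in RAM (and count against the pod's memory) instead of node disk.
	vol := corev1.Volume{
		Name: "scratch-tmpfs",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}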
• [SLOW TEST:6.170 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3305,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:19.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 25 22:10:19.867: INFO: >>> kubeConfig: /root/.kube/config May 25 22:10:22.797: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:32.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4000" for this suite. 
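------------------------------
The CRD-publishing spec above registers two CRDs that share a group and version but differ in kind, then checks that both schemas surface in the cluster's OpenAPI document. A sketch of that setup with the apiextensions client follows; the group demo.example.com and the Foo/Bar kinds are invented for illustration, and the root schema simply opts out of pruning with x-kubernetes-preserve-unknown-fields.

package main

import (
	"context"
	"strings"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

// crd builds a namespaced CRD in one shared group/version with the given kind.
func crd(kind, plural string) *apiextv1.CustomResourceDefinition {
	preserve := true
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + ".demo.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "demo.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Kind: kind, ListKind: kind + "List",
				Plural: plural, Singular: strings.ToLower(kind),
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type: "object", XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	ac, err := apiextclient.NewForConfig(cfg)
	must(err)
	// Two kinds, one group/version: both should then appear under /openapi/v2.
	for _, c := range []*apiextv1.CustomResourceDefinition{crd("Foo", "foos"), crd("Bar", "bars")} {
		_, err := ac.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), c, metav1.CreateOptions{})
		must(err)
	}
}
------------------------------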
• [SLOW TEST:12.427 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":212,"skipped":3308,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:32.218: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 22:10:32.731: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 22:10:34.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041432, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041432, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041432, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 22:10:37.778: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:10:37.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5891-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:38.914: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4754" for this suite. STEP: Destroying namespace "webhook-4754-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.843 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":213,"skipped":3328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:39.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0525 22:10:40.143785 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 22:10:40.143: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:40.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2817" for this suite. 
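------------------------------
What the garbage-collector spec above relies on is the delete propagation policy: deleting a Deployment without orphaning lets the collector chase down its ReplicaSets and pods asynchronously, which is why the log briefly sees "expected 0 rs, got 1 rs" before converging. A sketch, assuming an existing Deployment named demo-deploy in the default namespace whose pods carry an app=demo-deploy label:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Background propagation: the Deployment goes away immediately and the
	// garbage collector deletes the dependent ReplicaSets and pods afterwards.
	policy := metav1.DeletePropagationBackground
	must(cs.AppsV1().Deployments("default").Delete(context.TODO(), "demo-deploy",
		metav1.DeleteOptions{PropagationPolicy: &policy}))

	// Like the spec, poll until the owned ReplicaSets are collected.
	for {
		rsList, err := cs.AppsV1().ReplicaSets("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=demo-deploy"})
		must(err)
		if len(rsList.Items) == 0 {
			fmt.Println("all ReplicaSets garbage collected")
			return
		}
		fmt.Printf("expected 0 rs, got %d rs\n", len(rsList.Items))
		time.Sleep(2 * time.Second)
	}
}
------------------------------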
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":214,"skipped":3412,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:40.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 25 22:10:40.235: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7756 /api/v1/namespaces/watch-7756/configmaps/e2e-watch-test-watch-closed 66f035cc-5616-483e-971d-8513dfd71c04 19132120 0 2020-05-25 22:10:40 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 25 22:10:40.235: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7756 /api/v1/namespaces/watch-7756/configmaps/e2e-watch-test-watch-closed 66f035cc-5616-483e-971d-8513dfd71c04 19132121 0 2020-05-25 22:10:40 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 25 22:10:40.246: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7756 /api/v1/namespaces/watch-7756/configmaps/e2e-watch-test-watch-closed 66f035cc-5616-483e-971d-8513dfd71c04 19132122 0 2020-05-25 22:10:40 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 25 22:10:40.246: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7756 /api/v1/namespaces/watch-7756/configmaps/e2e-watch-test-watch-closed 66f035cc-5616-483e-971d-8513dfd71c04 19132123 0 2020-05-25 22:10:40 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:40.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7756" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":215,"skipped":3440,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:40.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-5687c469-ab45-4ba6-b212-0a5a299bdb96 STEP: Creating a pod to test consume configMaps May 25 22:10:40.353: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14" in namespace "projected-8101" to be "success or failure" May 25 22:10:40.377: INFO: Pod "pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14": Phase="Pending", Reason="", readiness=false. Elapsed: 24.235573ms May 25 22:10:42.526: INFO: Pod "pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173368606s May 25 22:10:44.530: INFO: Pod "pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14": Phase="Running", Reason="", readiness=true. Elapsed: 4.177561912s May 25 22:10:46.535: INFO: Pod "pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.182074024s STEP: Saw pod success May 25 22:10:46.535: INFO: Pod "pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14" satisfied condition "success or failure" May 25 22:10:46.538: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14 container projected-configmap-volume-test: STEP: delete the pod May 25 22:10:46.567: INFO: Waiting for pod pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14 to disappear May 25 22:10:46.570: INFO: Pod pod-projected-configmaps-75fa6dd7-5b29-49e0-88c8-66d58b0cad14 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:46.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8101" for this suite. 
• [SLOW TEST:6.324 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3455,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:46.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 25 22:10:46.681: INFO: Waiting up to 5m0s for pod "pod-f76cef3f-6b13-48ec-93dc-38f833a5f810" in namespace "emptydir-5060" to be "success or failure" May 25 22:10:46.690: INFO: Pod "pod-f76cef3f-6b13-48ec-93dc-38f833a5f810": Phase="Pending", Reason="", readiness=false. Elapsed: 8.984922ms May 25 22:10:48.711: INFO: Pod "pod-f76cef3f-6b13-48ec-93dc-38f833a5f810": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029959074s May 25 22:10:50.730: INFO: Pod "pod-f76cef3f-6b13-48ec-93dc-38f833a5f810": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048162243s STEP: Saw pod success May 25 22:10:50.730: INFO: Pod "pod-f76cef3f-6b13-48ec-93dc-38f833a5f810" satisfied condition "success or failure" May 25 22:10:50.733: INFO: Trying to get logs from node jerma-worker2 pod pod-f76cef3f-6b13-48ec-93dc-38f833a5f810 container test-container: STEP: delete the pod May 25 22:10:50.752: INFO: Waiting for pod pod-f76cef3f-6b13-48ec-93dc-38f833a5f810 to disappear May 25 22:10:50.762: INFO: Pod pod-f76cef3f-6b13-48ec-93dc-38f833a5f810 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:50.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5060" for this suite. 
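------------------------------
Both this EmptyDir case and the "as non-root" projected-configMap case that follows come down to the pod-level security context: run the container as an unprivileged UID and make the volume contents permissive enough (0777 here) that this UID can still create and read files. A sketch that renders the relevant pod shape as a manifest instead of hitting a cluster; UID 1001 and all names are arbitrary.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1001) // arbitrary non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name:         "scratch",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}},
			}},
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "busybox",
				// 0777 so the non-root UID can create and read files on the tmpfs mount.
				Command:      []string{"sh", "-c", "touch /scratch/f && chmod 0777 /scratch/f && ls -ln /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	b, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(b)) // print the manifest for inspection or kubectl apply
}
------------------------------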
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3471,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:50.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-c3352f42-1abb-4cc3-9ff3-c27dba723905 STEP: Creating a pod to test consume configMaps May 25 22:10:50.857: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad" in namespace "projected-3051" to be "success or failure" May 25 22:10:50.874: INFO: Pod "pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad": Phase="Pending", Reason="", readiness=false. Elapsed: 16.844389ms May 25 22:10:52.878: INFO: Pod "pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020359147s May 25 22:10:54.882: INFO: Pod "pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024401859s May 25 22:10:56.886: INFO: Pod "pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028696488s STEP: Saw pod success May 25 22:10:56.886: INFO: Pod "pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad" satisfied condition "success or failure" May 25 22:10:56.890: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad container projected-configmap-volume-test: STEP: delete the pod May 25 22:10:56.906: INFO: Waiting for pod pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad to disappear May 25 22:10:56.910: INFO: Pod pod-projected-configmaps-a6055748-4b70-477c-ac4b-84ce4ba0e6ad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:10:56.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3051" for this suite. 
• [SLOW TEST:6.149 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3472,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:10:56.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-35fef9b4-7bc2-49ff-9a7a-343e9bda8fb6 STEP: Creating a pod to test consume secrets May 25 22:10:56.995: INFO: Waiting up to 5m0s for pod "pod-secrets-ef77942c-b3f1-432f-8c48-9deb254da447" in namespace "secrets-6075" to be "success or failure" May 25 22:10:57.034: INFO: Pod "pod-secrets-ef77942c-b3f1-432f-8c48-9deb254da447": Phase="Pending", Reason="", readiness=false. Elapsed: 38.177624ms May 25 22:10:59.038: INFO: Pod "pod-secrets-ef77942c-b3f1-432f-8c48-9deb254da447": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042098814s May 25 22:11:01.043: INFO: Pod "pod-secrets-ef77942c-b3f1-432f-8c48-9deb254da447": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047260046s STEP: Saw pod success May 25 22:11:01.043: INFO: Pod "pod-secrets-ef77942c-b3f1-432f-8c48-9deb254da447" satisfied condition "success or failure" May 25 22:11:01.046: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ef77942c-b3f1-432f-8c48-9deb254da447 container secret-volume-test: STEP: delete the pod May 25 22:11:01.067: INFO: Waiting for pod pod-secrets-ef77942c-b3f1-432f-8c48-9deb254da447 to disappear May 25 22:11:01.071: INFO: Pod pod-secrets-ef77942c-b3f1-432f-8c48-9deb254da447 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:11:01.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6075" for this suite. 
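------------------------------
defaultMode on a secret volume controls the file mode the kubelet gives each projected key, so the mode is set at mount time rather than chmod-ed afterwards; that is the knob the Secrets spec above exercises. A sketch, with 0400 picked arbitrarily (note the Go octal literal):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-secret"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	_, err = cs.CoreV1().Secrets("default").Create(context.TODO(), sec, metav1.CreateOptions{})
	must(err)

	mode := int32(0400) // -r-------- on every file in the volume
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-vol",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "demo-secret", DefaultMode: &mode},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "view",
				Image:        "busybox",
				Command:      []string{"ls", "-l", "/etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-vol", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	_, err = cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	must(err)
}
------------------------------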
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3477,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:11:01.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 25 22:11:05.725: INFO: Successfully updated pod "pod-update-activedeadlineseconds-6313bb54-6f46-4d50-aefc-76ee9388ee64" May 25 22:11:05.725: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-6313bb54-6f46-4d50-aefc-76ee9388ee64" in namespace "pods-6001" to be "terminated due to deadline exceeded" May 25 22:11:05.733: INFO: Pod "pod-update-activedeadlineseconds-6313bb54-6f46-4d50-aefc-76ee9388ee64": Phase="Running", Reason="", readiness=true. Elapsed: 7.520238ms May 25 22:11:07.737: INFO: Pod "pod-update-activedeadlineseconds-6313bb54-6f46-4d50-aefc-76ee9388ee64": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011909181s May 25 22:11:07.737: INFO: Pod "pod-update-activedeadlineseconds-6313bb54-6f46-4d50-aefc-76ee9388ee64" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:11:07.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6001" for this suite. 
• [SLOW TEST:6.666 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3491,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:11:07.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0525 22:11:48.513090 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 22:11:48.513: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:11:48.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5544" for this suite. 
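------------------------------
This garbage-collector case is the inverse of the earlier one: with PropagationPolicy set to Orphan, deleting the ReplicationController strips the owner reference instead of cascading, so the pods must survive the 30-second observation window above. A sketch, assuming an existing RC named demo-rc:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	// Orphan propagation: the RC is deleted, its pods keep running with the
	// ownerReference removed, so the garbage collector leaves them alone.
	orphan := metav1.DeletePropagationOrphan
	must(cs.CoreV1().ReplicationControllers("default").Delete(context.TODO(), "demo-rc",
		metav1.DeleteOptions{PropagationPolicy: &orphan}))
}
------------------------------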
• [SLOW TEST:40.776 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":221,"skipped":3536,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:11:48.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 22:11:49.019: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 22:11:51.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041509, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041509, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041509, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041509, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 22:11:54.084: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: 
create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:12:04.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5201" for this suite. STEP: Destroying namespace "webhook-5201-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.908 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":222,"skipped":3591,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:12:04.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:12:04.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7833" for this suite.
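------------------------------
The admission webhook specs in this stretch all follow the same registration pattern: a ValidatingWebhookConfiguration that routes matching requests to the sample-webhook-deployment's service, after which non-compliant objects are denied, rules can be patched, and whitelisted namespaces bypass the check. A sketch of registering one such configuration; the path, service reference, rule contents and the CABundle placeholder are all invented for illustration.

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	must(err)
	cs, err := kubernetes.NewForConfig(cfg)
	must(err)

	path := "/validate"                   // placeholder endpoint on the webhook service
	fail := admissionv1.Fail              // reject the request if the webhook is unreachable
	none := admissionv1.SideEffectClassNone
	webhookCfg := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-validating-webhook"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "deny-configmaps.demo.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "default", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: []byte("<PEM bundle for the serving cert>"), // placeholder
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create, admissionv1.Update},
				Rule: admissionv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	_, err = cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), webhookCfg, metav1.CreateOptions{})
	must(err)
}
------------------------------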
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":223,"skipped":3626,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:12:04.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:12:04.725: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"18bf606a-d2d4-4607-83ee-fd9549139234", Controller:(*bool)(0xc00258b49a), BlockOwnerDeletion:(*bool)(0xc00258b49b)}} May 25 22:12:04.743: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"1f9e8592-6e51-4c40-bd56-f618497a12b8", Controller:(*bool)(0xc00258b6fa), BlockOwnerDeletion:(*bool)(0xc00258b6fb)}} May 25 22:12:04.776: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9b27d479-a33f-49c4-aa48-9bbee46004b4", Controller:(*bool)(0xc00258b90a), BlockOwnerDeletion:(*bool)(0xc00258b90b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:12:09.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4758" for this suite. • [SLOW TEST:5.324 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":224,"skipped":3642,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:12:09.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 25 22:12:18.016: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 22:12:18.034: INFO: Pod pod-with-prestop-exec-hook still exists May 25 22:12:20.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 22:12:20.039: INFO: Pod pod-with-prestop-exec-hook still exists May 25 22:12:22.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 22:12:22.039: INFO: Pod pod-with-prestop-exec-hook still exists May 25 22:12:24.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 22:12:24.039: INFO: Pod pod-with-prestop-exec-hook still exists May 25 22:12:26.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 22:12:26.039: INFO: Pod pod-with-prestop-exec-hook still exists May 25 22:12:28.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 22:12:28.039: INFO: Pod pod-with-prestop-exec-hook still exists May 25 22:12:30.035: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 25 22:12:30.039: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:12:30.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-640" for this suite. • [SLOW TEST:20.178 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3650,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:12:30.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8635.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8635.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8635.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8635.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 83.152.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.152.83_udp@PTR;check="$$(dig +tcp +noall +answer +search 83.152.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.152.83_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8635.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8635.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8635.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8635.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8635.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8635.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8635.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 83.152.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.152.83_udp@PTR;check="$$(dig +tcp +noall +answer +search 83.152.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.152.83_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 22:12:36.438: INFO: Unable to read wheezy_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:36.441: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:36.443: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:36.446: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:36.464: INFO: Unable to read jessie_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:36.466: INFO: Unable to read jessie_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:36.468: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:36.470: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:36.483: INFO: Lookups using dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a failed for: [wheezy_udp@dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_udp@dns-test-service.dns-8635.svc.cluster.local jessie_tcp@dns-test-service.dns-8635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local] May 25 22:12:41.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:41.491: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods 
dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:41.495: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:41.498: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:41.519: INFO: Unable to read jessie_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:41.522: INFO: Unable to read jessie_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:41.524: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:41.526: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:41.543: INFO: Lookups using dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a failed for: [wheezy_udp@dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_udp@dns-test-service.dns-8635.svc.cluster.local jessie_tcp@dns-test-service.dns-8635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local] May 25 22:12:46.487: INFO: Unable to read wheezy_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:46.494: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:46.497: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:46.500: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:46.516: INFO: Unable to read jessie_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the 
server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:46.519: INFO: Unable to read jessie_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:46.521: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:46.524: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:46.538: INFO: Lookups using dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a failed for: [wheezy_udp@dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_udp@dns-test-service.dns-8635.svc.cluster.local jessie_tcp@dns-test-service.dns-8635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local] May 25 22:12:51.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:51.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:51.496: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:51.499: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:51.522: INFO: Unable to read jessie_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:51.525: INFO: Unable to read jessie_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:51.528: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:51.532: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod 
dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:51.557: INFO: Lookups using dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a failed for: [wheezy_udp@dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_udp@dns-test-service.dns-8635.svc.cluster.local jessie_tcp@dns-test-service.dns-8635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local] May 25 22:12:56.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:56.492: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:56.496: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:56.499: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:56.525: INFO: Unable to read jessie_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:56.527: INFO: Unable to read jessie_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:56.529: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:56.531: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:12:56.544: INFO: Lookups using dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a failed for: [wheezy_udp@dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_udp@dns-test-service.dns-8635.svc.cluster.local jessie_tcp@dns-test-service.dns-8635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local] May 25 
22:13:01.488: INFO: Unable to read wheezy_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:13:01.491: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:13:01.494: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:13:01.498: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:13:01.518: INFO: Unable to read jessie_udp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:13:01.520: INFO: Unable to read jessie_tcp@dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:13:01.522: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:13:01.524: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local from pod dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a: the server could not find the requested resource (get pods dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a) May 25 22:13:01.539: INFO: Lookups using dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a failed for: [wheezy_udp@dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@dns-test-service.dns-8635.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_udp@dns-test-service.dns-8635.svc.cluster.local jessie_tcp@dns-test-service.dns-8635.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8635.svc.cluster.local] May 25 22:13:06.546: INFO: DNS probes using dns-8635/dns-test-ec5522ea-5d96-46e0-a6db-ff9c78287a1a succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:13:07.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8635" for this suite. 
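For reference, the behavior probed above (A and SRV lookups for dns-test-service under the namespace's svc.cluster.local zone, over UDP and TCP, from two resolver images) can be reproduced by hand. A minimal sketch; the namespace and probe image below are illustrative stand-ins, not the objects the framework generated:

# All names here are illustrative. Create a ClusterIP service, then resolve
# its cluster DNS name the same way the wheezy/jessie probe containers do.
kubectl create namespace dns-demo
kubectl create service clusterip dns-test-service --tcp=80:80 -n dns-demo
kubectl run dns-probe -it --rm --restart=Never -n dns-demo \
  --image=busybox:1.28 -- nslookup dns-test-service.dns-demo.svc.cluster.local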
• [SLOW TEST:37.361 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":226,"skipped":3667,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:13:07.431: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 22:13:07.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6144c475-0b89-47f5-8f27-51be1ca1053c" in namespace "downward-api-9284" to be "success or failure" May 25 22:13:07.533: INFO: Pod "downwardapi-volume-6144c475-0b89-47f5-8f27-51be1ca1053c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474286ms May 25 22:13:09.537: INFO: Pod "downwardapi-volume-6144c475-0b89-47f5-8f27-51be1ca1053c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00906975s May 25 22:13:11.541: INFO: Pod "downwardapi-volume-6144c475-0b89-47f5-8f27-51be1ca1053c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012776667s STEP: Saw pod success May 25 22:13:11.541: INFO: Pod "downwardapi-volume-6144c475-0b89-47f5-8f27-51be1ca1053c" satisfied condition "success or failure" May 25 22:13:11.544: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-6144c475-0b89-47f5-8f27-51be1ca1053c container client-container: STEP: delete the pod May 25 22:13:11.607: INFO: Waiting for pod downwardapi-volume-6144c475-0b89-47f5-8f27-51be1ca1053c to disappear May 25 22:13:11.617: INFO: Pod downwardapi-volume-6144c475-0b89-47f5-8f27-51be1ca1053c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:13:11.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9284" for this suite. 
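The CPU-limit assertion above relies on a downwardAPI volume with a resourceFieldRef. A minimal sketch with illustrative names; note that with the default divisor of 1, a 500m limit is rounded up, so the file reads 1:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-limit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
kubectl logs downward-cpu-limit-demo   # prints 1 (500m rounded up by the default divisor)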
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:13:11.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 25 22:13:11.691: INFO: Waiting up to 5m0s for pod "downward-api-ea7c8e4d-8c1b-43b8-be17-d1f933038588" in namespace "downward-api-1292" to be "success or failure" May 25 22:13:11.810: INFO: Pod "downward-api-ea7c8e4d-8c1b-43b8-be17-d1f933038588": Phase="Pending", Reason="", readiness=false. Elapsed: 118.710372ms May 25 22:13:13.894: INFO: Pod "downward-api-ea7c8e4d-8c1b-43b8-be17-d1f933038588": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202600775s May 25 22:13:15.897: INFO: Pod "downward-api-ea7c8e4d-8c1b-43b8-be17-d1f933038588": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.206245886s STEP: Saw pod success May 25 22:13:15.897: INFO: Pod "downward-api-ea7c8e4d-8c1b-43b8-be17-d1f933038588" satisfied condition "success or failure" May 25 22:13:15.900: INFO: Trying to get logs from node jerma-worker pod downward-api-ea7c8e4d-8c1b-43b8-be17-d1f933038588 container dapi-container: STEP: delete the pod May 25 22:13:15.967: INFO: Waiting for pod downward-api-ea7c8e4d-8c1b-43b8-be17-d1f933038588 to disappear May 25 22:13:15.969: INFO: Pod downward-api-ea7c8e4d-8c1b-43b8-be17-d1f933038588 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:13:15.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1292" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3709,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:13:15.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 22:13:16.720: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 22:13:18.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041596, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041596, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041596, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041596, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 22:13:21.834: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 25 22:13:21.856: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:13:22.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2008" for this suite. STEP: Destroying namespace "webhook-2008-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.196 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":229,"skipped":3740,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:13:22.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:13:26.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5158" for this suite. 
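The /etc/hosts assertion above is driven by pod-level hostAliases, which the kubelet appends to the container's hosts file. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo   # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/hosts"]
EOF
kubectl logs hostaliases-demo   # the aliases appear in a kubelet-managed block at the end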
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3744,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:13:26.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 25 22:13:26.481: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 25 22:13:26.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6833' May 25 22:13:26.809: INFO: stderr: "" May 25 22:13:26.809: INFO: stdout: "service/agnhost-slave created\n" May 25 22:13:26.810: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 25 22:13:26.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6833' May 25 22:13:27.635: INFO: stderr: "" May 25 22:13:27.635: INFO: stdout: "service/agnhost-master created\n" May 25 22:13:27.635: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 25 22:13:27.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6833' May 25 22:13:28.445: INFO: stderr: "" May 25 22:13:28.445: INFO: stdout: "service/frontend created\n" May 25 22:13:28.445: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 25 22:13:28.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6833' May 25 22:13:28.747: INFO: stderr: "" May 25 22:13:28.747: INFO: stdout: "deployment.apps/frontend created\n" May 25 22:13:28.747: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 25 22:13:28.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6833' May 25 22:13:29.079: INFO: stderr: "" May 25 22:13:29.079: INFO: stdout: "deployment.apps/agnhost-master created\n" May 25 22:13:29.079: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 25 22:13:29.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6833' May 25 22:13:29.896: INFO: stderr: "" May 25 22:13:29.896: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 25 22:13:29.896: INFO: Waiting for all frontend pods to be Running. May 25 22:13:39.947: INFO: Waiting for frontend to serve content. May 25 22:13:39.958: INFO: Trying to add a new entry to the guestbook. May 25 22:13:39.972: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 25 22:13:39.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6833' May 25 22:13:40.230: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 22:13:40.230: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 25 22:13:40.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6833' May 25 22:13:40.379: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 25 22:13:40.379: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 25 22:13:40.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6833' May 25 22:13:40.498: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 22:13:40.498: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 25 22:13:40.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6833' May 25 22:13:40.605: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 22:13:40.605: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 25 22:13:40.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6833' May 25 22:13:40.733: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 22:13:40.733: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 25 22:13:40.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6833' May 25 22:13:40.838: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 22:13:40.838: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:13:40.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6833" for this suite. 
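Condensed, the cycle this spec drives is plain kubectl against the manifests logged above. A sketch assuming those manifests were saved locally under guestbook/ (an illustrative path):

kubectl create -f guestbook/ --namespace=kubectl-6833
kubectl get pods -l app=guestbook,tier=frontend --namespace=kubectl-6833   # wait for Running
kubectl delete --grace-period=0 --force -f guestbook/ --namespace=kubectl-6833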
• [SLOW TEST:14.523 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":231,"skipped":3749,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:13:40.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5023 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5023 I0525 22:13:41.594239 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5023, replica count: 2 I0525 22:13:44.644713 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0525 22:13:47.644965 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 25 22:13:47.645: INFO: Creating new exec pod May 25 22:13:52.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5023 execpodkxhp7 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 25 22:13:52.914: INFO: stderr: "I0525 22:13:52.826970 3905 log.go:172] (0xc000a5c0b0) (0xc00040b4a0) Create stream\nI0525 22:13:52.827033 3905 log.go:172] (0xc000a5c0b0) (0xc00040b4a0) Stream added, broadcasting: 1\nI0525 22:13:52.829440 3905 log.go:172] (0xc000a5c0b0) Reply frame received for 1\nI0525 22:13:52.829504 3905 log.go:172] (0xc000a5c0b0) (0xc0009ce000) Create stream\nI0525 22:13:52.829524 3905 log.go:172] (0xc000a5c0b0) (0xc0009ce000) Stream added, broadcasting: 3\nI0525 22:13:52.830394 3905 log.go:172] (0xc000a5c0b0) Reply frame received for 3\nI0525 22:13:52.830425 3905 log.go:172] (0xc000a5c0b0) (0xc0006b3a40) Create stream\nI0525 22:13:52.830433 3905 log.go:172] (0xc000a5c0b0) (0xc0006b3a40) Stream added, broadcasting: 5\nI0525 22:13:52.831168 3905 log.go:172] (0xc000a5c0b0) Reply frame received for 5\nI0525 22:13:52.905746 3905 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0525 22:13:52.905786 3905 log.go:172] (0xc0006b3a40) (5) Data frame handling\nI0525 
22:13:52.905820 3905 log.go:172] (0xc0006b3a40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0525 22:13:52.906793 3905 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0525 22:13:52.906822 3905 log.go:172] (0xc0006b3a40) (5) Data frame handling\nI0525 22:13:52.906862 3905 log.go:172] (0xc0006b3a40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0525 22:13:52.907287 3905 log.go:172] (0xc000a5c0b0) Data frame received for 3\nI0525 22:13:52.907311 3905 log.go:172] (0xc0009ce000) (3) Data frame handling\nI0525 22:13:52.907353 3905 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0525 22:13:52.907384 3905 log.go:172] (0xc0006b3a40) (5) Data frame handling\nI0525 22:13:52.908954 3905 log.go:172] (0xc000a5c0b0) Data frame received for 1\nI0525 22:13:52.908971 3905 log.go:172] (0xc00040b4a0) (1) Data frame handling\nI0525 22:13:52.908980 3905 log.go:172] (0xc00040b4a0) (1) Data frame sent\nI0525 22:13:52.908988 3905 log.go:172] (0xc000a5c0b0) (0xc00040b4a0) Stream removed, broadcasting: 1\nI0525 22:13:52.909484 3905 log.go:172] (0xc000a5c0b0) Go away received\nI0525 22:13:52.909672 3905 log.go:172] (0xc000a5c0b0) (0xc00040b4a0) Stream removed, broadcasting: 1\nI0525 22:13:52.909698 3905 log.go:172] (0xc000a5c0b0) (0xc0009ce000) Stream removed, broadcasting: 3\nI0525 22:13:52.909712 3905 log.go:172] (0xc000a5c0b0) (0xc0006b3a40) Stream removed, broadcasting: 5\n" May 25 22:13:52.914: INFO: stdout: "" May 25 22:13:52.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5023 execpodkxhp7 -- /bin/sh -x -c nc -zv -t -w 2 10.108.16.143 80' May 25 22:13:53.092: INFO: stderr: "I0525 22:13:53.035678 3927 log.go:172] (0xc0000f4fd0) (0xc0008c2280) Create stream\nI0525 22:13:53.035735 3927 log.go:172] (0xc0000f4fd0) (0xc0008c2280) Stream added, broadcasting: 1\nI0525 22:13:53.037522 3927 log.go:172] (0xc0000f4fd0) Reply frame received for 1\nI0525 22:13:53.037556 3927 log.go:172] (0xc0000f4fd0) (0xc0007aa0a0) Create stream\nI0525 22:13:53.037571 3927 log.go:172] (0xc0000f4fd0) (0xc0007aa0a0) Stream added, broadcasting: 3\nI0525 22:13:53.038331 3927 log.go:172] (0xc0000f4fd0) Reply frame received for 3\nI0525 22:13:53.038372 3927 log.go:172] (0xc0000f4fd0) (0xc0006f55e0) Create stream\nI0525 22:13:53.038397 3927 log.go:172] (0xc0000f4fd0) (0xc0006f55e0) Stream added, broadcasting: 5\nI0525 22:13:53.039088 3927 log.go:172] (0xc0000f4fd0) Reply frame received for 5\nI0525 22:13:53.084017 3927 log.go:172] (0xc0000f4fd0) Data frame received for 3\nI0525 22:13:53.084055 3927 log.go:172] (0xc0007aa0a0) (3) Data frame handling\nI0525 22:13:53.084362 3927 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0525 22:13:53.084400 3927 log.go:172] (0xc0006f55e0) (5) Data frame handling\nI0525 22:13:53.084420 3927 log.go:172] (0xc0006f55e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.108.16.143 80\nConnection to 10.108.16.143 80 port [tcp/http] succeeded!\nI0525 22:13:53.084435 3927 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0525 22:13:53.084469 3927 log.go:172] (0xc0006f55e0) (5) Data frame handling\nI0525 22:13:53.086393 3927 log.go:172] (0xc0000f4fd0) Data frame received for 1\nI0525 22:13:53.086423 3927 log.go:172] (0xc0008c2280) (1) Data frame handling\nI0525 22:13:53.086439 3927 log.go:172] (0xc0008c2280) (1) Data frame sent\nI0525 22:13:53.086472 3927 log.go:172] (0xc0000f4fd0) (0xc0008c2280) Stream removed, broadcasting: 1\nI0525 22:13:53.086551 3927 log.go:172] (0xc0000f4fd0) Go away 
received\nI0525 22:13:53.086842 3927 log.go:172] (0xc0000f4fd0) (0xc0008c2280) Stream removed, broadcasting: 1\nI0525 22:13:53.086859 3927 log.go:172] (0xc0000f4fd0) (0xc0007aa0a0) Stream removed, broadcasting: 3\nI0525 22:13:53.086868 3927 log.go:172] (0xc0000f4fd0) (0xc0006f55e0) Stream removed, broadcasting: 5\n" May 25 22:13:53.092: INFO: stdout: "" May 25 22:13:53.092: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:13:53.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5023" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.310 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":232,"skipped":3771,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:13:53.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
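The ExternalName-to-ClusterIP change verified above can be reproduced with a patch. A sketch under illustrative names (the services-demo namespace and execpod pod are placeholders, and the exec assumes such a pod exists):

kubectl create service externalname externalname-service \
  --external-name=example.com -n services-demo
kubectl patch service externalname-service -n services-demo --type=merge -p \
  '{"spec":{"type":"ClusterIP","externalName":null,"selector":{"app":"externalname-service"},"ports":[{"port":80,"targetPort":80}]}}'
# Same reachability probe the test runs from its exec pod:
kubectl exec -n services-demo execpod -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'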
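The spec continuing below exercises a postStart httpGet lifecycle hook against the handler pod created in its BeforeEach above. A self-contained sketch of the hook shape only, with illustrative names: here the GET targets the pod's own server (host defaults to the pod IP), which can race with container startup, whereas the spec targets its dedicated handler pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-http-demo   # illustrative name
spec:
  containers:
  - name: main
    image: nginx:1.17
    lifecycle:
      postStart:
        httpGet:
          path: /
          port: 80
EOF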
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 25 22:14:01.322: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 22:14:01.352: INFO: Pod pod-with-poststart-http-hook still exists May 25 22:14:03.352: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 22:14:03.373: INFO: Pod pod-with-poststart-http-hook still exists May 25 22:14:05.352: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 22:14:05.357: INFO: Pod pod-with-poststart-http-hook still exists May 25 22:14:07.352: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 22:14:07.357: INFO: Pod pod-with-poststart-http-hook still exists May 25 22:14:09.352: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 25 22:14:09.356: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:14:09.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3566" for this suite. • [SLOW TEST:16.205 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3804,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:14:09.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 25 22:14:09.491: INFO: Waiting up to 5m0s for pod "downward-api-0adaf3f4-458b-4d42-a057-2e152169b2f0" in namespace "downward-api-8026" to be "success or failure" May 25 22:14:09.501: INFO: Pod "downward-api-0adaf3f4-458b-4d42-a057-2e152169b2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.930739ms May 25 22:14:11.565: INFO: Pod "downward-api-0adaf3f4-458b-4d42-a057-2e152169b2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073212453s May 25 22:14:13.569: INFO: Pod "downward-api-0adaf3f4-458b-4d42-a057-2e152169b2f0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.077956474s STEP: Saw pod success May 25 22:14:13.569: INFO: Pod "downward-api-0adaf3f4-458b-4d42-a057-2e152169b2f0" satisfied condition "success or failure" May 25 22:14:13.573: INFO: Trying to get logs from node jerma-worker2 pod downward-api-0adaf3f4-458b-4d42-a057-2e152169b2f0 container dapi-container: STEP: delete the pod May 25 22:14:13.599: INFO: Waiting for pod downward-api-0adaf3f4-458b-4d42-a057-2e152169b2f0 to disappear May 25 22:14:13.696: INFO: Pod downward-api-0adaf3f4-458b-4d42-a057-2e152169b2f0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:14:13.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8026" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3820,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:14:13.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 25 22:14:13.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9139bac5-bca9-4843-881b-bc917f34d531" in namespace "downward-api-2906" to be "success or failure" May 25 22:14:13.774: INFO: Pod "downwardapi-volume-9139bac5-bca9-4843-881b-bc917f34d531": Phase="Pending", Reason="", readiness=false. Elapsed: 15.16307ms May 25 22:14:15.798: INFO: Pod "downwardapi-volume-9139bac5-bca9-4843-881b-bc917f34d531": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039795504s May 25 22:14:17.802: INFO: Pod "downwardapi-volume-9139bac5-bca9-4843-881b-bc917f34d531": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043529624s STEP: Saw pod success May 25 22:14:17.802: INFO: Pod "downwardapi-volume-9139bac5-bca9-4843-881b-bc917f34d531" satisfied condition "success or failure" May 25 22:14:17.805: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9139bac5-bca9-4843-881b-bc917f34d531 container client-container: STEP: delete the pod May 25 22:14:17.895: INFO: Waiting for pod downwardapi-volume-9139bac5-bca9-4843-881b-bc917f34d531 to disappear May 25 22:14:17.911: INFO: Pod downwardapi-volume-9139bac5-bca9-4843-881b-bc917f34d531 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:14:17.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2906" for this suite. 
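The podname check above mirrors the CPU-limit spec earlier, but with a fieldRef instead of a resourceFieldRef in the downwardAPI volume. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-podname-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-podname-demo   # prints downward-podname-demo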
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3824,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:14:17.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 25 22:14:25.328: INFO: 9 pods remaining May 25 22:14:25.328: INFO: 0 pods has nil DeletionTimestamp May 25 22:14:25.328: INFO: May 25 22:14:27.002: INFO: 0 pods remaining May 25 22:14:27.002: INFO: 0 pods has nil DeletionTimestamp May 25 22:14:27.002: INFO: May 25 22:14:27.901: INFO: 0 pods remaining May 25 22:14:27.901: INFO: 0 pods has nil DeletionTimestamp May 25 22:14:27.901: INFO: STEP: Gathering metrics W0525 22:14:29.224480 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 22:14:29.224: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:14:29.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7321" for this suite. 
• [SLOW TEST:11.312 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":236,"skipped":3870,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:14:29.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:15:04.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6078" for this suite. 
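The three terminate-cmd-* containers above differ only in restart policy (rpa=Always, rpof=OnFailure, rpn=Never) and assert the resulting phase, state, and restart count. A minimal sketch of one case, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: terminate-cmd
    image: busybox:1.28
    command: ["sh", "-c", "exit 1"]
EOF
# Under restartPolicy=Never a non-zero exit leaves phase Failed, restartCount 0:
kubectl get pod terminate-demo \
  -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount}{"\n"}'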
• [SLOW TEST:35.062 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3882,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:15:04.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 25 22:15:08.496: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:15:08.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4341" for this suite. 
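The "Expected: &{}" line above asserts an empty termination message: with TerminationMessagePolicy FallbackToLogsOnError, the container's logs are copied into the message only when it exits with an error, so a successful exit leaves the field empty. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.28
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Succeeded exit, so no fallback occurs and the message field stays empty:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}{"\n"}'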
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3927,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:15:08.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 25 22:15:08.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7923' May 25 22:15:11.968: INFO: stderr: "" May 25 22:15:11.968: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 25 22:15:11.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7923' May 25 22:15:12.094: INFO: stderr: "" May 25 22:15:12.094: INFO: stdout: "update-demo-nautilus-g7mlp update-demo-nautilus-vxqz4 " May 25 22:15:12.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:12.192: INFO: stderr: "" May 25 22:15:12.192: INFO: stdout: "" May 25 22:15:12.192: INFO: update-demo-nautilus-g7mlp is created but not running May 25 22:15:17.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7923' May 25 22:15:17.295: INFO: stderr: "" May 25 22:15:17.296: INFO: stdout: "update-demo-nautilus-g7mlp update-demo-nautilus-vxqz4 " May 25 22:15:17.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:17.397: INFO: stderr: "" May 25 22:15:17.397: INFO: stdout: "true" May 25 22:15:17.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:17.478: INFO: stderr: "" May 25 22:15:17.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 22:15:17.478: INFO: validating pod update-demo-nautilus-g7mlp May 25 22:15:17.481: INFO: got data: { "image": "nautilus.jpg" } May 25 22:15:17.481: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 22:15:17.481: INFO: update-demo-nautilus-g7mlp is verified up and running May 25 22:15:17.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxqz4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:17.571: INFO: stderr: "" May 25 22:15:17.571: INFO: stdout: "true" May 25 22:15:17.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxqz4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:17.654: INFO: stderr: "" May 25 22:15:17.654: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 22:15:17.654: INFO: validating pod update-demo-nautilus-vxqz4 May 25 22:15:17.658: INFO: got data: { "image": "nautilus.jpg" } May 25 22:15:17.658: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 22:15:17.658: INFO: update-demo-nautilus-vxqz4 is verified up and running STEP: scaling down the replication controller May 25 22:15:17.660: INFO: scanned /root for discovery docs: May 25 22:15:17.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7923' May 25 22:15:18.772: INFO: stderr: "" May 25 22:15:18.772: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 25 22:15:18.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7923' May 25 22:15:18.871: INFO: stderr: "" May 25 22:15:18.871: INFO: stdout: "update-demo-nautilus-g7mlp update-demo-nautilus-vxqz4 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 25 22:15:23.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7923' May 25 22:15:23.967: INFO: stderr: "" May 25 22:15:23.967: INFO: stdout: "update-demo-nautilus-g7mlp update-demo-nautilus-vxqz4 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 25 22:15:28.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7923' May 25 22:15:29.077: INFO: stderr: "" May 25 22:15:29.077: INFO: stdout: "update-demo-nautilus-g7mlp update-demo-nautilus-vxqz4 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 25 22:15:34.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7923' May 25 22:15:34.176: INFO: stderr: "" May 25 22:15:34.176: INFO: stdout: "update-demo-nautilus-g7mlp " May 25 22:15:34.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:34.269: INFO: stderr: "" May 25 22:15:34.269: INFO: stdout: "true" May 25 22:15:34.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:34.369: INFO: stderr: "" May 25 22:15:34.369: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 22:15:34.369: INFO: validating pod update-demo-nautilus-g7mlp May 25 22:15:34.372: INFO: got data: { "image": "nautilus.jpg" } May 25 22:15:34.372: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 22:15:34.372: INFO: update-demo-nautilus-g7mlp is verified up and running STEP: scaling up the replication controller May 25 22:15:34.374: INFO: scanned /root for discovery docs: May 25 22:15:34.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7923' May 25 22:15:35.499: INFO: stderr: "" May 25 22:15:35.499: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 25 22:15:35.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7923' May 25 22:15:35.608: INFO: stderr: "" May 25 22:15:35.608: INFO: stdout: "update-demo-nautilus-g7mlp update-demo-nautilus-n8lcd " May 25 22:15:35.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:35.699: INFO: stderr: "" May 25 22:15:35.699: INFO: stdout: "true" May 25 22:15:35.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:35.795: INFO: stderr: "" May 25 22:15:35.795: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 22:15:35.795: INFO: validating pod update-demo-nautilus-g7mlp May 25 22:15:35.799: INFO: got data: { "image": "nautilus.jpg" } May 25 22:15:35.799: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 22:15:35.799: INFO: update-demo-nautilus-g7mlp is verified up and running May 25 22:15:35.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n8lcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:35.896: INFO: stderr: "" May 25 22:15:35.896: INFO: stdout: "" May 25 22:15:35.896: INFO: update-demo-nautilus-n8lcd is created but not running May 25 22:15:40.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7923' May 25 22:15:41.007: INFO: stderr: "" May 25 22:15:41.007: INFO: stdout: "update-demo-nautilus-g7mlp update-demo-nautilus-n8lcd " May 25 22:15:41.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:41.096: INFO: stderr: "" May 25 22:15:41.097: INFO: stdout: "true" May 25 22:15:41.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g7mlp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:41.186: INFO: stderr: "" May 25 22:15:41.186: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 22:15:41.186: INFO: validating pod update-demo-nautilus-g7mlp May 25 22:15:41.190: INFO: got data: { "image": "nautilus.jpg" } May 25 22:15:41.190: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 25 22:15:41.190: INFO: update-demo-nautilus-g7mlp is verified up and running May 25 22:15:41.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n8lcd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:41.296: INFO: stderr: "" May 25 22:15:41.296: INFO: stdout: "true" May 25 22:15:41.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-n8lcd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7923' May 25 22:15:41.389: INFO: stderr: "" May 25 22:15:41.389: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 25 22:15:41.389: INFO: validating pod update-demo-nautilus-n8lcd May 25 22:15:41.393: INFO: got data: { "image": "nautilus.jpg" } May 25 22:15:41.393: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 25 22:15:41.393: INFO: update-demo-nautilus-n8lcd is verified up and running STEP: using delete to clean up resources May 25 22:15:41.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7923' May 25 22:15:41.485: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 25 22:15:41.485: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 25 22:15:41.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7923' May 25 22:15:41.586: INFO: stderr: "No resources found in kubectl-7923 namespace.\n" May 25 22:15:41.586: INFO: stdout: "" May 25 22:15:41.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7923 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 22:15:41.680: INFO: stderr: "" May 25 22:15:41.681: INFO: stdout: "update-demo-nautilus-g7mlp\nupdate-demo-nautilus-n8lcd\n" May 25 22:15:42.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7923' May 25 22:15:42.343: INFO: stderr: "No resources found in kubectl-7923 namespace.\n" May 25 22:15:42.343: INFO: stdout: "" May 25 22:15:42.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7923 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 25 22:15:42.503: INFO: stderr: "" May 25 22:15:42.503: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:15:42.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7923" for this suite. 
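The cleanup above relies on force deletion and can be reproduced directly with the names from this run; kubectl emits the same warning because immediate deletion does not wait for the pods to actually stop:

kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-7923
# list only pods not already marked for deletion, as the test does
kubectl get pods -l name=update-demo --namespace=kubectl-7923 \
  -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'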
• [SLOW TEST:34.166 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":239,"skipped":3931,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:15:42.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 22:15:42.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5917' May 25 22:15:43.089: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 25 22:15:43.089: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 25 22:15:43.115: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-96wqw] May 25 22:15:43.115: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-96wqw" in namespace "kubectl-5917" to be "running and ready" May 25 22:15:43.148: INFO: Pod "e2e-test-httpd-rc-96wqw": Phase="Pending", Reason="", readiness=false. Elapsed: 33.765087ms May 25 22:15:45.153: INFO: Pod "e2e-test-httpd-rc-96wqw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038404577s May 25 22:15:47.157: INFO: Pod "e2e-test-httpd-rc-96wqw": Phase="Running", Reason="", readiness=true. Elapsed: 4.042744661s May 25 22:15:47.157: INFO: Pod "e2e-test-httpd-rc-96wqw" satisfied condition "running and ready" May 25 22:15:47.157: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-96wqw] May 25 22:15:47.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5917' May 25 22:15:47.279: INFO: stderr: "" May 25 22:15:47.279: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.8. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.8. Set the 'ServerName' directive globally to suppress this message\n[Mon May 25 22:15:45.639839 2020] [mpm_event:notice] [pid 1:tid 140492011961192] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon May 25 22:15:45.639890 2020] [core:notice] [pid 1:tid 140492011961192] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 25 22:15:47.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5917' May 25 22:15:47.388: INFO: stderr: "" May 25 22:15:47.388: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:15:47.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5917" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":240,"skipped":3968,"failed":0} SSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:15:47.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:15:47.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8138" for this suite. 
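The Lease test above drives create/read/update/delete against the coordination.k8s.io API group. An easy place to see that API live on a cluster of this vintage is the node heartbeat leases, one per node in kube-node-lease; the node name below is taken from this run and assumes default heartbeat settings:

kubectl get leases --namespace=kube-node-lease
# renewTime is bumped by the node's kubelet on each heartbeat
kubectl get lease jerma-worker --namespace=kube-node-lease -o yaml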
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":241,"skipped":3972,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:15:47.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-424acdfa-ed4e-4e23-9d5e-31c2e8ccfec3 STEP: Creating a pod to test consume configMaps May 25 22:15:47.776: INFO: Waiting up to 5m0s for pod "pod-configmaps-77312308-55c9-4381-bd87-9d3b6278f4f8" in namespace "configmap-7834" to be "success or failure" May 25 22:15:47.779: INFO: Pod "pod-configmaps-77312308-55c9-4381-bd87-9d3b6278f4f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.791969ms May 25 22:15:49.831: INFO: Pod "pod-configmaps-77312308-55c9-4381-bd87-9d3b6278f4f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054414083s May 25 22:15:52.057: INFO: Pod "pod-configmaps-77312308-55c9-4381-bd87-9d3b6278f4f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.281011197s STEP: Saw pod success May 25 22:15:52.057: INFO: Pod "pod-configmaps-77312308-55c9-4381-bd87-9d3b6278f4f8" satisfied condition "success or failure" May 25 22:15:52.060: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-77312308-55c9-4381-bd87-9d3b6278f4f8 container configmap-volume-test: STEP: delete the pod May 25 22:15:52.143: INFO: Waiting for pod pod-configmaps-77312308-55c9-4381-bd87-9d3b6278f4f8 to disappear May 25 22:15:52.188: INFO: Pod pod-configmaps-77312308-55c9-4381-bd87-9d3b6278f4f8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:15:52.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7834" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3988,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:15:52.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:15:52.257: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7cb8547c-f1b2-4b0e-8835-5297d0427be6" in namespace "security-context-test-5813" to be "success or failure" May 25 22:15:52.272: INFO: Pod "alpine-nnp-false-7cb8547c-f1b2-4b0e-8835-5297d0427be6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.680864ms May 25 22:15:54.276: INFO: Pod "alpine-nnp-false-7cb8547c-f1b2-4b0e-8835-5297d0427be6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018953792s May 25 22:15:56.280: INFO: Pod "alpine-nnp-false-7cb8547c-f1b2-4b0e-8835-5297d0427be6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022933433s May 25 22:15:56.280: INFO: Pod "alpine-nnp-false-7cb8547c-f1b2-4b0e-8835-5297d0427be6" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:15:56.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5813" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":4009,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:15:56.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 25 22:16:01.044: INFO: Successfully updated pod "annotationupdatea36ac244-7d81-4baa-860d-4d6d99c5f16c" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:16:03.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9617" for this suite. • [SLOW TEST:6.812 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4013,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:16:03.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 25 22:16:03.797: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 25 22:16:05.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041763, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041763, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041763, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726041763, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 25 22:16:08.869: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:16:09.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-180" for this suite. STEP: Destroying namespace "webhook-180-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.151 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":245,"skipped":4048,"failed":0} S ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:16:09.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-58f6d294-5e92-4915-bc7a-bd8684c0fd6d STEP: Creating configMap with name cm-test-opt-upd-22f0ab31-48c2-48e9-bb32-7f541faefa8d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-58f6d294-5e92-4915-bc7a-bd8684c0fd6d STEP: Updating configmap cm-test-opt-upd-22f0ab31-48c2-48e9-bb32-7f541faefa8d STEP: Creating configMap with name 
cm-test-opt-create-a21eda70-ecfc-4751-8581-dcc6abbeffde STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:16:19.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9523" for this suite. • [SLOW TEST:10.648 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4049,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:16:19.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 25 22:16:19.978: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 5.221847ms)
May 25 22:16:19.983: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.781657ms)
May 25 22:16:19.987: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.102348ms)
May 25 22:16:19.991: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.445664ms)
May 25 22:16:19.995: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.776879ms)
May 25 22:16:19.998: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.248632ms)
May 25 22:16:20.022: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 24.145933ms)
May 25 22:16:20.026: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.460884ms)
May 25 22:16:20.030: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.856965ms)
May 25 22:16:20.033: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.750047ms)
May 25 22:16:20.037: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.899284ms)
May 25 22:16:20.040: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.949057ms)
May 25 22:16:20.044: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.58795ms)
May 25 22:16:20.047: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.197862ms)
May 25 22:16:20.051: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.791483ms)
May 25 22:16:20.055: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.626584ms)
May 25 22:16:20.058: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.826607ms)
May 25 22:16:20.060: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.496381ms)
May 25 22:16:20.064: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.704762ms)
May 25 22:16:20.067: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.935977ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:16:20.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-344" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":247,"skipped":4058,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:16:20.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 25 22:16:20.167: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 25 22:16:20.179: INFO: Waiting for terminating namespaces to be deleted... May 25 22:16:20.181: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 25 22:16:20.187: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 22:16:20.187: INFO: Container kindnet-cni ready: true, restart count 0 May 25 22:16:20.187: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 22:16:20.187: INFO: Container kube-proxy ready: true, restart count 0 May 25 22:16:20.187: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 25 22:16:20.193: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 25 22:16:20.193: INFO: Container kube-bench ready: false, restart count 0 May 25 22:16:20.193: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 22:16:20.193: INFO: Container kindnet-cni ready: true, restart count 0 May 25 22:16:20.193: INFO: pod-configmaps-6af21c95-f26e-47d5-910d-ebc26b780c1f from configmap-9523 started at 2020-05-25 22:16:10 +0000 UTC (3 container statuses recorded) May 25 22:16:20.193: INFO: Container createcm-volume-test ready: true, restart count 0 May 25 22:16:20.193: INFO: Container delcm-volume-test ready: true, restart count 0 May 25 22:16:20.193: INFO: Container updcm-volume-test ready: true, restart count 0 May 25 22:16:20.193: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 25 22:16:20.193: INFO: Container kube-proxy ready: true, restart count 0 May 25 22:16:20.193: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 25 22:16:20.193: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. 
STEP: Considering event: Type = [Warning], Name = [restricted-pod.16126548704ae106], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.161265487118c0d4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:16:21.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1302" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":248,"skipped":4094,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:16:21.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 25 22:16:21.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 25 22:16:21.395: INFO: stderr: "" May 25 22:16:21.396: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:16:21.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5522" for this suite. 
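The cluster-info check above greps kubectl's colorized output for the master and KubeDNS lines. Reproducing it by hand, with the optional dump for deeper debugging:

kubectl cluster-info
# full cluster state dump for offline inspection (can be large)
kubectl cluster-info dump --output-directory=/tmp/cluster-state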
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":249,"skipped":4122,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:16:21.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 25 22:16:21.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1239' May 25 22:16:21.590: INFO: stderr: "" May 25 22:16:21.590: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 25 22:16:21.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1239' May 25 22:16:29.240: INFO: stderr: "" May 25 22:16:29.240: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:16:29.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1239" for this suite. 
• [SLOW TEST:7.870 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":250,"skipped":4129,"failed":0} [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:16:29.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:16:45.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7664" for this suite. • [SLOW TEST:16.098 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":251,"skipped":4129,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:16:45.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8084 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-8084 May 25 22:16:45.508: INFO: Found 0 stateful 
pods, waiting for 1 May 25 22:16:55.512: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 25 22:16:55.548: INFO: Deleting all statefulset in ns statefulset-8084 May 25 22:16:55.584: INFO: Scaling statefulset ss to 0 May 25 22:17:15.630: INFO: Waiting for statefulset status.replicas updated to 0 May 25 22:17:15.633: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:17:15.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8084" for this suite. • [SLOW TEST:30.290 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":252,"skipped":4168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:17:15.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:17:19.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4453" for this suite. 
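A read-only root filesystem is requested per container through securityContext; writes anywhere outside a mounted volume then fail with a read-only-filesystem error. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-root-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /should-fail 2>&1 || echo read-only as expected"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
kubectl logs readonly-root-demo   # expected: touch error, then: read-only as expected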
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4201,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:17:19.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-b104839e-ea9f-499f-9c7a-ec1930f7f78d STEP: Creating a pod to test consume configMaps May 25 22:17:19.849: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8f1247b-e8c1-4052-93dc-2779b9f15c3a" in namespace "configmap-3125" to be "success or failure" May 25 22:17:19.851: INFO: Pod "pod-configmaps-f8f1247b-e8c1-4052-93dc-2779b9f15c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.768594ms May 25 22:17:22.017: INFO: Pod "pod-configmaps-f8f1247b-e8c1-4052-93dc-2779b9f15c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168340333s May 25 22:17:24.022: INFO: Pod "pod-configmaps-f8f1247b-e8c1-4052-93dc-2779b9f15c3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173304326s STEP: Saw pod success May 25 22:17:24.022: INFO: Pod "pod-configmaps-f8f1247b-e8c1-4052-93dc-2779b9f15c3a" satisfied condition "success or failure" May 25 22:17:24.025: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f8f1247b-e8c1-4052-93dc-2779b9f15c3a container configmap-volume-test: STEP: delete the pod May 25 22:17:24.051: INFO: Waiting for pod pod-configmaps-f8f1247b-e8c1-4052-93dc-2779b9f15c3a to disappear May 25 22:17:24.072: INFO: Pod pod-configmaps-f8f1247b-e8c1-4052-93dc-2779b9f15c3a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:17:24.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3125" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:17:24.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8139 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 22:17:24.160: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 25 22:17:46.344: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.222:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8139 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:17:46.344: INFO: >>> kubeConfig: /root/.kube/config I0525 22:17:46.382094 6 log.go:172] (0xc0016e06e0) (0xc0021ccaa0) Create stream I0525 22:17:46.382125 6 log.go:172] (0xc0016e06e0) (0xc0021ccaa0) Stream added, broadcasting: 1 I0525 22:17:46.384012 6 log.go:172] (0xc0016e06e0) Reply frame received for 1 I0525 22:17:46.384053 6 log.go:172] (0xc0016e06e0) (0xc0022403c0) Create stream I0525 22:17:46.384069 6 log.go:172] (0xc0016e06e0) (0xc0022403c0) Stream added, broadcasting: 3 I0525 22:17:46.384891 6 log.go:172] (0xc0016e06e0) Reply frame received for 3 I0525 22:17:46.384920 6 log.go:172] (0xc0016e06e0) (0xc001d81680) Create stream I0525 22:17:46.384930 6 log.go:172] (0xc0016e06e0) (0xc001d81680) Stream added, broadcasting: 5 I0525 22:17:46.385995 6 log.go:172] (0xc0016e06e0) Reply frame received for 5 I0525 22:17:46.470504 6 log.go:172] (0xc0016e06e0) Data frame received for 3 I0525 22:17:46.470540 6 log.go:172] (0xc0022403c0) (3) Data frame handling I0525 22:17:46.470578 6 log.go:172] (0xc0022403c0) (3) Data frame sent I0525 22:17:46.470588 6 log.go:172] (0xc0016e06e0) Data frame received for 3 I0525 22:17:46.470612 6 log.go:172] (0xc0022403c0) (3) Data frame handling I0525 22:17:46.471129 6 log.go:172] (0xc0016e06e0) Data frame received for 5 I0525 22:17:46.471165 6 log.go:172] (0xc001d81680) (5) Data frame handling I0525 22:17:46.472495 6 log.go:172] (0xc0016e06e0) Data frame received for 1 I0525 22:17:46.472518 6 log.go:172] (0xc0021ccaa0) (1) Data frame handling I0525 22:17:46.472532 6 log.go:172] (0xc0021ccaa0) (1) Data frame sent I0525 22:17:46.472559 6 log.go:172] (0xc0016e06e0) (0xc0021ccaa0) Stream removed, broadcasting: 1 I0525 22:17:46.472581 6 log.go:172] (0xc0016e06e0) Go away received I0525 22:17:46.472679 6 log.go:172] (0xc0016e06e0) (0xc0021ccaa0) Stream removed, broadcasting: 1 I0525 22:17:46.472712 6 log.go:172] (0xc0016e06e0) 
(0xc0022403c0) Stream removed, broadcasting: 3 I0525 22:17:46.472738 6 log.go:172] (0xc0016e06e0) (0xc001d81680) Stream removed, broadcasting: 5 May 25 22:17:46.472: INFO: Found all expected endpoints: [netserver-0] May 25 22:17:46.475: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.13:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8139 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:17:46.475: INFO: >>> kubeConfig: /root/.kube/config I0525 22:17:46.510075 6 log.go:172] (0xc001678790) (0xc001d81f40) Create stream I0525 22:17:46.510127 6 log.go:172] (0xc001678790) (0xc001d81f40) Stream added, broadcasting: 1 I0525 22:17:46.517577 6 log.go:172] (0xc001678790) Reply frame received for 1 I0525 22:17:46.517633 6 log.go:172] (0xc001678790) (0xc0016c77c0) Create stream I0525 22:17:46.517652 6 log.go:172] (0xc001678790) (0xc0016c77c0) Stream added, broadcasting: 3 I0525 22:17:46.519059 6 log.go:172] (0xc001678790) Reply frame received for 3 I0525 22:17:46.519111 6 log.go:172] (0xc001678790) (0xc0016c7cc0) Create stream I0525 22:17:46.519131 6 log.go:172] (0xc001678790) (0xc0016c7cc0) Stream added, broadcasting: 5 I0525 22:17:46.520159 6 log.go:172] (0xc001678790) Reply frame received for 5 I0525 22:17:46.574630 6 log.go:172] (0xc001678790) Data frame received for 5 I0525 22:17:46.574670 6 log.go:172] (0xc0016c7cc0) (5) Data frame handling I0525 22:17:46.574691 6 log.go:172] (0xc001678790) Data frame received for 3 I0525 22:17:46.574707 6 log.go:172] (0xc0016c77c0) (3) Data frame handling I0525 22:17:46.574714 6 log.go:172] (0xc0016c77c0) (3) Data frame sent I0525 22:17:46.574720 6 log.go:172] (0xc001678790) Data frame received for 3 I0525 22:17:46.574724 6 log.go:172] (0xc0016c77c0) (3) Data frame handling I0525 22:17:46.576215 6 log.go:172] (0xc001678790) Data frame received for 1 I0525 22:17:46.576255 6 log.go:172] (0xc001d81f40) (1) Data frame handling I0525 22:17:46.576277 6 log.go:172] (0xc001d81f40) (1) Data frame sent I0525 22:17:46.576302 6 log.go:172] (0xc001678790) (0xc001d81f40) Stream removed, broadcasting: 1 I0525 22:17:46.576324 6 log.go:172] (0xc001678790) Go away received I0525 22:17:46.576440 6 log.go:172] (0xc001678790) (0xc001d81f40) Stream removed, broadcasting: 1 I0525 22:17:46.576454 6 log.go:172] (0xc001678790) (0xc0016c77c0) Stream removed, broadcasting: 3 I0525 22:17:46.576462 6 log.go:172] (0xc001678790) (0xc0016c7cc0) Stream removed, broadcasting: 5 May 25 22:17:46.576: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:17:46.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8139" for this suite. 
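The ExecWithOptions streams above are the framework's remote exec; the same probe can be issued with kubectl exec while the test namespace still exists, using the pod, container, and endpoint from this run:

kubectl exec host-test-container-pod --namespace=pod-network-test-8139 -c agnhost -- \
  sh -c 'curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.222:8080/hostName'

The response body is the serving pod's hostname (netserver-0 here), which is how the test matches endpoints across the node-to-pod network path.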
• [SLOW TEST:22.505 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4251,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:17:46.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3282 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 25 22:17:46.737: INFO: Found 0 stateful pods, waiting for 3 May 25 22:17:56.742: INFO: Found 2 stateful pods, waiting for 3 May 25 22:18:06.743: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 25 22:18:06.743: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 25 22:18:06.743: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 25 22:18:06.771: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 25 22:18:16.993: INFO: Updating stateful set ss2 May 25 22:18:17.810: INFO: Waiting for Pod statefulset-3282/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 25 22:18:27.819: INFO: Waiting for Pod statefulset-3282/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 25 22:18:38.433: INFO: Found 2 stateful pods, waiting for 3 May 25 22:18:48.438: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 25 22:18:48.438: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 25 22:18:48.438: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - 
Ready=true STEP: Performing a phased rolling update May 25 22:18:48.460: INFO: Updating stateful set ss2 May 25 22:18:48.552: INFO: Waiting for Pod statefulset-3282/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 25 22:18:58.560: INFO: Waiting for Pod statefulset-3282/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 25 22:19:08.577: INFO: Updating stateful set ss2 May 25 22:19:08.642: INFO: Waiting for StatefulSet statefulset-3282/ss2 to complete update May 25 22:19:08.642: INFO: Waiting for Pod statefulset-3282/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 25 22:19:18.650: INFO: Waiting for StatefulSet statefulset-3282/ss2 to complete update May 25 22:19:18.650: INFO: Waiting for Pod statefulset-3282/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 25 22:19:28.651: INFO: Deleting all statefulset in ns statefulset-3282 May 25 22:19:28.654: INFO: Scaling statefulset ss2 to 0 May 25 22:19:58.711: INFO: Waiting for statefulset status.replicas updated to 0 May 25 22:19:58.713: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:19:58.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3282" for this suite. • [SLOW TEST:132.175 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":256,"skipped":4274,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:19:58.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:20:02.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8427" for this suite. 
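The kubelet logging test boils down to: anything a container writes to stdout must come back through kubectl logs. A hand-run equivalent with hypothetical names, in whatever namespace is current:

kubectl run busybox-logs-demo --image=docker.io/library/busybox:1.29 --restart=Never -- sh -c 'echo Hello from busybox'
kubectl logs busybox-logs-demo   # expected: Hello from busybox
kubectl delete pod busybox-logs-demo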
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4277,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:20:02.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 25 22:20:03.077: INFO: Waiting up to 5m0s for pod "client-containers-d2fbc770-94a5-498d-b6d7-1bfe905ffa3a" in namespace "containers-5833" to be "success or failure" May 25 22:20:03.080: INFO: Pod "client-containers-d2fbc770-94a5-498d-b6d7-1bfe905ffa3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.939346ms May 25 22:20:05.600: INFO: Pod "client-containers-d2fbc770-94a5-498d-b6d7-1bfe905ffa3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.523008654s May 25 22:20:07.605: INFO: Pod "client-containers-d2fbc770-94a5-498d-b6d7-1bfe905ffa3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.527976294s STEP: Saw pod success May 25 22:20:07.605: INFO: Pod "client-containers-d2fbc770-94a5-498d-b6d7-1bfe905ffa3a" satisfied condition "success or failure" May 25 22:20:07.609: INFO: Trying to get logs from node jerma-worker pod client-containers-d2fbc770-94a5-498d-b6d7-1bfe905ffa3a container test-container: STEP: delete the pod May 25 22:20:07.825: INFO: Waiting for pod client-containers-d2fbc770-94a5-498d-b6d7-1bfe905ffa3a to disappear May 25 22:20:07.865: INFO: Pod client-containers-d2fbc770-94a5-498d-b6d7-1bfe905ffa3a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:20:07.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5833" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4279,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:20:07.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 25 22:20:08.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3419 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 25 22:20:10.817: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0525 22:20:10.765917 4703 log.go:172] (0xc0000fa420) (0xc00059db80) Create stream\nI0525 22:20:10.765988 4703 log.go:172] (0xc0000fa420) (0xc00059db80) Stream added, broadcasting: 1\nI0525 22:20:10.768727 4703 log.go:172] (0xc0000fa420) Reply frame received for 1\nI0525 22:20:10.768789 4703 log.go:172] (0xc0000fa420) (0xc000a56140) Create stream\nI0525 22:20:10.768815 4703 log.go:172] (0xc0000fa420) (0xc000a56140) Stream added, broadcasting: 3\nI0525 22:20:10.770162 4703 log.go:172] (0xc0000fa420) Reply frame received for 3\nI0525 22:20:10.770263 4703 log.go:172] (0xc0000fa420) (0xc0007ce000) Create stream\nI0525 22:20:10.770295 4703 log.go:172] (0xc0000fa420) (0xc0007ce000) Stream added, broadcasting: 5\nI0525 22:20:10.771568 4703 log.go:172] (0xc0000fa420) Reply frame received for 5\nI0525 22:20:10.771688 4703 log.go:172] (0xc0000fa420) (0xc000a56280) Create stream\nI0525 22:20:10.771728 4703 log.go:172] (0xc0000fa420) (0xc000a56280) Stream added, broadcasting: 7\nI0525 22:20:10.772699 4703 log.go:172] (0xc0000fa420) Reply frame received for 7\nI0525 22:20:10.772843 4703 log.go:172] (0xc000a56140) (3) Writing data frame\nI0525 22:20:10.772969 4703 log.go:172] (0xc000a56140) (3) Writing data frame\nI0525 22:20:10.774079 4703 log.go:172] (0xc0000fa420) Data frame received for 5\nI0525 22:20:10.774100 4703 log.go:172] (0xc0007ce000) (5) Data frame handling\nI0525 22:20:10.774118 4703 log.go:172] (0xc0007ce000) (5) Data frame sent\nI0525 22:20:10.774842 4703 log.go:172] (0xc0000fa420) Data frame received for 5\nI0525 22:20:10.774858 4703 log.go:172] (0xc0007ce000) (5) Data frame handling\nI0525 22:20:10.774867 4703 log.go:172] (0xc0007ce000) (5) Data frame sent\nI0525 22:20:10.795843 4703 log.go:172] (0xc0000fa420) Data frame received for 7\nI0525 22:20:10.795993 4703 log.go:172] (0xc000a56280) (7) 
Data frame handling\nI0525 22:20:10.796116 4703 log.go:172] (0xc0000fa420) Data frame received for 5\nI0525 22:20:10.796223 4703 log.go:172] (0xc0007ce000) (5) Data frame handling\nI0525 22:20:10.796675 4703 log.go:172] (0xc0000fa420) (0xc000a56140) Stream removed, broadcasting: 3\nI0525 22:20:10.796733 4703 log.go:172] (0xc0000fa420) Data frame received for 1\nI0525 22:20:10.796763 4703 log.go:172] (0xc00059db80) (1) Data frame handling\nI0525 22:20:10.796811 4703 log.go:172] (0xc00059db80) (1) Data frame sent\nI0525 22:20:10.796840 4703 log.go:172] (0xc0000fa420) (0xc00059db80) Stream removed, broadcasting: 1\nI0525 22:20:10.796867 4703 log.go:172] (0xc0000fa420) Go away received\nI0525 22:20:10.797519 4703 log.go:172] (0xc0000fa420) (0xc00059db80) Stream removed, broadcasting: 1\nI0525 22:20:10.797549 4703 log.go:172] (0xc0000fa420) (0xc000a56140) Stream removed, broadcasting: 3\nI0525 22:20:10.797567 4703 log.go:172] (0xc0000fa420) (0xc0007ce000) Stream removed, broadcasting: 5\nI0525 22:20:10.797581 4703 log.go:172] (0xc0000fa420) (0xc000a56280) Stream removed, broadcasting: 7\n" May 25 22:20:10.817: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:20:12.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3419" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":259,"skipped":4290,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:20:12.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 25 22:20:23.003: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.004: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.046109 6 log.go:172] (0xc0028fe420) (0xc002752500) Create stream I0525 22:20:23.046135 6 log.go:172] (0xc0028fe420) (0xc002752500) Stream added, broadcasting: 1 I0525 22:20:23.047879 6 log.go:172] (0xc0028fe420) Reply frame received for 1 I0525 22:20:23.047915 6 log.go:172] (0xc0028fe420) (0xc0013e9f40) Create stream I0525 22:20:23.047935 6 log.go:172] (0xc0028fe420) (0xc0013e9f40) Stream added, broadcasting: 3 I0525 22:20:23.048879 6 log.go:172] (0xc0028fe420) Reply frame 
received for 3 I0525 22:20:23.048936 6 log.go:172] (0xc0028fe420) (0xc002930fa0) Create stream I0525 22:20:23.048958 6 log.go:172] (0xc0028fe420) (0xc002930fa0) Stream added, broadcasting: 5 I0525 22:20:23.050351 6 log.go:172] (0xc0028fe420) Reply frame received for 5 I0525 22:20:23.114664 6 log.go:172] (0xc0028fe420) Data frame received for 5 I0525 22:20:23.114697 6 log.go:172] (0xc002930fa0) (5) Data frame handling I0525 22:20:23.114729 6 log.go:172] (0xc0028fe420) Data frame received for 3 I0525 22:20:23.114769 6 log.go:172] (0xc0013e9f40) (3) Data frame handling I0525 22:20:23.114790 6 log.go:172] (0xc0013e9f40) (3) Data frame sent I0525 22:20:23.114821 6 log.go:172] (0xc0028fe420) Data frame received for 3 I0525 22:20:23.114844 6 log.go:172] (0xc0013e9f40) (3) Data frame handling I0525 22:20:23.116679 6 log.go:172] (0xc0028fe420) Data frame received for 1 I0525 22:20:23.116703 6 log.go:172] (0xc002752500) (1) Data frame handling I0525 22:20:23.116716 6 log.go:172] (0xc002752500) (1) Data frame sent I0525 22:20:23.116736 6 log.go:172] (0xc0028fe420) (0xc002752500) Stream removed, broadcasting: 1 I0525 22:20:23.116759 6 log.go:172] (0xc0028fe420) Go away received I0525 22:20:23.116869 6 log.go:172] (0xc0028fe420) (0xc002752500) Stream removed, broadcasting: 1 I0525 22:20:23.116900 6 log.go:172] (0xc0028fe420) (0xc0013e9f40) Stream removed, broadcasting: 3 I0525 22:20:23.116925 6 log.go:172] (0xc0028fe420) (0xc002930fa0) Stream removed, broadcasting: 5 May 25 22:20:23.116: INFO: Exec stderr: "" May 25 22:20:23.116: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.117: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.143900 6 log.go:172] (0xc0016788f0) (0xc002931a40) Create stream I0525 22:20:23.143938 6 log.go:172] (0xc0016788f0) (0xc002931a40) Stream added, broadcasting: 1 I0525 22:20:23.145747 6 log.go:172] (0xc0016788f0) Reply frame received for 1 I0525 22:20:23.145773 6 log.go:172] (0xc0016788f0) (0xc0027525a0) Create stream I0525 22:20:23.145781 6 log.go:172] (0xc0016788f0) (0xc0027525a0) Stream added, broadcasting: 3 I0525 22:20:23.146699 6 log.go:172] (0xc0016788f0) Reply frame received for 3 I0525 22:20:23.146721 6 log.go:172] (0xc0016788f0) (0xc0021cc0a0) Create stream I0525 22:20:23.146728 6 log.go:172] (0xc0016788f0) (0xc0021cc0a0) Stream added, broadcasting: 5 I0525 22:20:23.147641 6 log.go:172] (0xc0016788f0) Reply frame received for 5 I0525 22:20:23.204581 6 log.go:172] (0xc0016788f0) Data frame received for 5 I0525 22:20:23.204636 6 log.go:172] (0xc0021cc0a0) (5) Data frame handling I0525 22:20:23.204672 6 log.go:172] (0xc0016788f0) Data frame received for 3 I0525 22:20:23.204703 6 log.go:172] (0xc0027525a0) (3) Data frame handling I0525 22:20:23.204729 6 log.go:172] (0xc0027525a0) (3) Data frame sent I0525 22:20:23.204744 6 log.go:172] (0xc0016788f0) Data frame received for 3 I0525 22:20:23.204757 6 log.go:172] (0xc0027525a0) (3) Data frame handling I0525 22:20:23.207945 6 log.go:172] (0xc0016788f0) Data frame received for 1 I0525 22:20:23.207992 6 log.go:172] (0xc002931a40) (1) Data frame handling I0525 22:20:23.208018 6 log.go:172] (0xc002931a40) (1) Data frame sent I0525 22:20:23.208033 6 log.go:172] (0xc0016788f0) (0xc002931a40) Stream removed, broadcasting: 1 I0525 22:20:23.208062 6 log.go:172] (0xc0016788f0) Go away received I0525 22:20:23.208231 6 log.go:172] (0xc0016788f0) (0xc002931a40) 
Stream removed, broadcasting: 1 I0525 22:20:23.208266 6 log.go:172] (0xc0016788f0) (0xc0027525a0) Stream removed, broadcasting: 3 I0525 22:20:23.208286 6 log.go:172] (0xc0016788f0) (0xc0021cc0a0) Stream removed, broadcasting: 5 May 25 22:20:23.208: INFO: Exec stderr: "" May 25 22:20:23.208: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.208: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.236779 6 log.go:172] (0xc0016e0580) (0xc0021cc5a0) Create stream I0525 22:20:23.236804 6 log.go:172] (0xc0016e0580) (0xc0021cc5a0) Stream added, broadcasting: 1 I0525 22:20:23.238639 6 log.go:172] (0xc0016e0580) Reply frame received for 1 I0525 22:20:23.238673 6 log.go:172] (0xc0016e0580) (0xc001018820) Create stream I0525 22:20:23.238686 6 log.go:172] (0xc0016e0580) (0xc001018820) Stream added, broadcasting: 3 I0525 22:20:23.239673 6 log.go:172] (0xc0016e0580) Reply frame received for 3 I0525 22:20:23.239719 6 log.go:172] (0xc0016e0580) (0xc001018a00) Create stream I0525 22:20:23.239742 6 log.go:172] (0xc0016e0580) (0xc001018a00) Stream added, broadcasting: 5 I0525 22:20:23.240706 6 log.go:172] (0xc0016e0580) Reply frame received for 5 I0525 22:20:23.303615 6 log.go:172] (0xc0016e0580) Data frame received for 5 I0525 22:20:23.303665 6 log.go:172] (0xc001018a00) (5) Data frame handling I0525 22:20:23.303694 6 log.go:172] (0xc0016e0580) Data frame received for 3 I0525 22:20:23.303709 6 log.go:172] (0xc001018820) (3) Data frame handling I0525 22:20:23.303723 6 log.go:172] (0xc001018820) (3) Data frame sent I0525 22:20:23.303738 6 log.go:172] (0xc0016e0580) Data frame received for 3 I0525 22:20:23.303755 6 log.go:172] (0xc001018820) (3) Data frame handling I0525 22:20:23.305470 6 log.go:172] (0xc0016e0580) Data frame received for 1 I0525 22:20:23.305502 6 log.go:172] (0xc0021cc5a0) (1) Data frame handling I0525 22:20:23.305534 6 log.go:172] (0xc0021cc5a0) (1) Data frame sent I0525 22:20:23.305575 6 log.go:172] (0xc0016e0580) (0xc0021cc5a0) Stream removed, broadcasting: 1 I0525 22:20:23.305717 6 log.go:172] (0xc0016e0580) (0xc0021cc5a0) Stream removed, broadcasting: 1 I0525 22:20:23.305767 6 log.go:172] (0xc0016e0580) (0xc001018820) Stream removed, broadcasting: 3 I0525 22:20:23.305794 6 log.go:172] (0xc0016e0580) (0xc001018a00) Stream removed, broadcasting: 5 May 25 22:20:23.305: INFO: Exec stderr: "" I0525 22:20:23.305866 6 log.go:172] (0xc0016e0580) Go away received May 25 22:20:23.305: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.305: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.343536 6 log.go:172] (0xc0028fec60) (0xc002752c80) Create stream I0525 22:20:23.343576 6 log.go:172] (0xc0028fec60) (0xc002752c80) Stream added, broadcasting: 1 I0525 22:20:23.346115 6 log.go:172] (0xc0028fec60) Reply frame received for 1 I0525 22:20:23.346181 6 log.go:172] (0xc0028fec60) (0xc002240000) Create stream I0525 22:20:23.346204 6 log.go:172] (0xc0028fec60) (0xc002240000) Stream added, broadcasting: 3 I0525 22:20:23.347400 6 log.go:172] (0xc0028fec60) Reply frame received for 3 I0525 22:20:23.347442 6 log.go:172] (0xc0028fec60) (0xc002931e00) Create stream I0525 22:20:23.347457 6 log.go:172] (0xc0028fec60) (0xc002931e00) Stream added, broadcasting: 5 I0525 
22:20:23.348536 6 log.go:172] (0xc0028fec60) Reply frame received for 5 I0525 22:20:23.427176 6 log.go:172] (0xc0028fec60) Data frame received for 5 I0525 22:20:23.427210 6 log.go:172] (0xc002931e00) (5) Data frame handling I0525 22:20:23.427247 6 log.go:172] (0xc0028fec60) Data frame received for 3 I0525 22:20:23.427276 6 log.go:172] (0xc002240000) (3) Data frame handling I0525 22:20:23.427440 6 log.go:172] (0xc002240000) (3) Data frame sent I0525 22:20:23.427592 6 log.go:172] (0xc0028fec60) Data frame received for 3 I0525 22:20:23.427609 6 log.go:172] (0xc002240000) (3) Data frame handling I0525 22:20:23.429054 6 log.go:172] (0xc0028fec60) Data frame received for 1 I0525 22:20:23.429066 6 log.go:172] (0xc002752c80) (1) Data frame handling I0525 22:20:23.429072 6 log.go:172] (0xc002752c80) (1) Data frame sent I0525 22:20:23.429080 6 log.go:172] (0xc0028fec60) (0xc002752c80) Stream removed, broadcasting: 1 I0525 22:20:23.429307 6 log.go:172] (0xc0028fec60) Go away received I0525 22:20:23.429381 6 log.go:172] (0xc0028fec60) (0xc002752c80) Stream removed, broadcasting: 1 I0525 22:20:23.429433 6 log.go:172] (0xc0028fec60) (0xc002240000) Stream removed, broadcasting: 3 I0525 22:20:23.429459 6 log.go:172] (0xc0028fec60) (0xc002931e00) Stream removed, broadcasting: 5 May 25 22:20:23.429: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 25 22:20:23.429: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.429: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.468598 6 log.go:172] (0xc001678e70) (0xc0014181e0) Create stream I0525 22:20:23.468645 6 log.go:172] (0xc001678e70) (0xc0014181e0) Stream added, broadcasting: 1 I0525 22:20:23.470757 6 log.go:172] (0xc001678e70) Reply frame received for 1 I0525 22:20:23.470792 6 log.go:172] (0xc001678e70) (0xc0022403c0) Create stream I0525 22:20:23.470804 6 log.go:172] (0xc001678e70) (0xc0022403c0) Stream added, broadcasting: 3 I0525 22:20:23.471810 6 log.go:172] (0xc001678e70) Reply frame received for 3 I0525 22:20:23.471851 6 log.go:172] (0xc001678e70) (0xc001418460) Create stream I0525 22:20:23.471867 6 log.go:172] (0xc001678e70) (0xc001418460) Stream added, broadcasting: 5 I0525 22:20:23.472803 6 log.go:172] (0xc001678e70) Reply frame received for 5 I0525 22:20:23.541995 6 log.go:172] (0xc001678e70) Data frame received for 5 I0525 22:20:23.542057 6 log.go:172] (0xc001418460) (5) Data frame handling I0525 22:20:23.542082 6 log.go:172] (0xc001678e70) Data frame received for 3 I0525 22:20:23.542096 6 log.go:172] (0xc0022403c0) (3) Data frame handling I0525 22:20:23.542119 6 log.go:172] (0xc0022403c0) (3) Data frame sent I0525 22:20:23.542132 6 log.go:172] (0xc001678e70) Data frame received for 3 I0525 22:20:23.542137 6 log.go:172] (0xc0022403c0) (3) Data frame handling I0525 22:20:23.543621 6 log.go:172] (0xc001678e70) Data frame received for 1 I0525 22:20:23.543649 6 log.go:172] (0xc0014181e0) (1) Data frame handling I0525 22:20:23.543667 6 log.go:172] (0xc0014181e0) (1) Data frame sent I0525 22:20:23.543681 6 log.go:172] (0xc001678e70) (0xc0014181e0) Stream removed, broadcasting: 1 I0525 22:20:23.543702 6 log.go:172] (0xc001678e70) Go away received I0525 22:20:23.543894 6 log.go:172] (0xc001678e70) (0xc0014181e0) Stream removed, broadcasting: 1 I0525 22:20:23.543918 6 log.go:172] (0xc001678e70) (0xc0022403c0) 
Stream removed, broadcasting: 3 I0525 22:20:23.543934 6 log.go:172] (0xc001678e70) (0xc001418460) Stream removed, broadcasting: 5 May 25 22:20:23.543: INFO: Exec stderr: "" May 25 22:20:23.543: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.544: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.575655 6 log.go:172] (0xc0016e0dc0) (0xc0021ccbe0) Create stream I0525 22:20:23.575699 6 log.go:172] (0xc0016e0dc0) (0xc0021ccbe0) Stream added, broadcasting: 1 I0525 22:20:23.578627 6 log.go:172] (0xc0016e0dc0) Reply frame received for 1 I0525 22:20:23.578841 6 log.go:172] (0xc0016e0dc0) (0xc0014185a0) Create stream I0525 22:20:23.578879 6 log.go:172] (0xc0016e0dc0) (0xc0014185a0) Stream added, broadcasting: 3 I0525 22:20:23.581632 6 log.go:172] (0xc0016e0dc0) Reply frame received for 3 I0525 22:20:23.581683 6 log.go:172] (0xc0016e0dc0) (0xc002752e60) Create stream I0525 22:20:23.581700 6 log.go:172] (0xc0016e0dc0) (0xc002752e60) Stream added, broadcasting: 5 I0525 22:20:23.584621 6 log.go:172] (0xc0016e0dc0) Reply frame received for 5 I0525 22:20:23.644520 6 log.go:172] (0xc0016e0dc0) Data frame received for 3 I0525 22:20:23.644565 6 log.go:172] (0xc0014185a0) (3) Data frame handling I0525 22:20:23.644583 6 log.go:172] (0xc0014185a0) (3) Data frame sent I0525 22:20:23.644598 6 log.go:172] (0xc0016e0dc0) Data frame received for 3 I0525 22:20:23.644614 6 log.go:172] (0xc0014185a0) (3) Data frame handling I0525 22:20:23.644667 6 log.go:172] (0xc0016e0dc0) Data frame received for 5 I0525 22:20:23.644710 6 log.go:172] (0xc002752e60) (5) Data frame handling I0525 22:20:23.646448 6 log.go:172] (0xc0016e0dc0) Data frame received for 1 I0525 22:20:23.646495 6 log.go:172] (0xc0021ccbe0) (1) Data frame handling I0525 22:20:23.646517 6 log.go:172] (0xc0021ccbe0) (1) Data frame sent I0525 22:20:23.646537 6 log.go:172] (0xc0016e0dc0) (0xc0021ccbe0) Stream removed, broadcasting: 1 I0525 22:20:23.646563 6 log.go:172] (0xc0016e0dc0) Go away received I0525 22:20:23.646647 6 log.go:172] (0xc0016e0dc0) (0xc0021ccbe0) Stream removed, broadcasting: 1 I0525 22:20:23.646665 6 log.go:172] (0xc0016e0dc0) (0xc0014185a0) Stream removed, broadcasting: 3 I0525 22:20:23.646674 6 log.go:172] (0xc0016e0dc0) (0xc002752e60) Stream removed, broadcasting: 5 May 25 22:20:23.646: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 25 22:20:23.646: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.646: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.684306 6 log.go:172] (0xc0016e13f0) (0xc0021ccf00) Create stream I0525 22:20:23.684333 6 log.go:172] (0xc0016e13f0) (0xc0021ccf00) Stream added, broadcasting: 1 I0525 22:20:23.686754 6 log.go:172] (0xc0016e13f0) Reply frame received for 1 I0525 22:20:23.686797 6 log.go:172] (0xc0016e13f0) (0xc002240500) Create stream I0525 22:20:23.686813 6 log.go:172] (0xc0016e13f0) (0xc002240500) Stream added, broadcasting: 3 I0525 22:20:23.687847 6 log.go:172] (0xc0016e13f0) Reply frame received for 3 I0525 22:20:23.687889 6 log.go:172] (0xc0016e13f0) (0xc002752fa0) Create stream I0525 22:20:23.687902 6 log.go:172] (0xc0016e13f0) (0xc002752fa0) Stream added, broadcasting: 5 
I0525 22:20:23.689324 6 log.go:172] (0xc0016e13f0) Reply frame received for 5 I0525 22:20:23.772967 6 log.go:172] (0xc0016e13f0) Data frame received for 3 I0525 22:20:23.773001 6 log.go:172] (0xc002240500) (3) Data frame handling I0525 22:20:23.773009 6 log.go:172] (0xc002240500) (3) Data frame sent I0525 22:20:23.773015 6 log.go:172] (0xc0016e13f0) Data frame received for 3 I0525 22:20:23.773020 6 log.go:172] (0xc002240500) (3) Data frame handling I0525 22:20:23.773030 6 log.go:172] (0xc0016e13f0) Data frame received for 5 I0525 22:20:23.773048 6 log.go:172] (0xc002752fa0) (5) Data frame handling I0525 22:20:23.774992 6 log.go:172] (0xc0016e13f0) Data frame received for 1 I0525 22:20:23.775007 6 log.go:172] (0xc0021ccf00) (1) Data frame handling I0525 22:20:23.775027 6 log.go:172] (0xc0021ccf00) (1) Data frame sent I0525 22:20:23.775039 6 log.go:172] (0xc0016e13f0) (0xc0021ccf00) Stream removed, broadcasting: 1 I0525 22:20:23.775119 6 log.go:172] (0xc0016e13f0) (0xc0021ccf00) Stream removed, broadcasting: 1 I0525 22:20:23.775138 6 log.go:172] (0xc0016e13f0) (0xc002240500) Stream removed, broadcasting: 3 I0525 22:20:23.775308 6 log.go:172] (0xc0016e13f0) (0xc002752fa0) Stream removed, broadcasting: 5 May 25 22:20:23.775: INFO: Exec stderr: "" May 25 22:20:23.775: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.775: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.775407 6 log.go:172] (0xc0016e13f0) Go away received I0525 22:20:23.811514 6 log.go:172] (0xc0016e1a20) (0xc0021cd0e0) Create stream I0525 22:20:23.811543 6 log.go:172] (0xc0016e1a20) (0xc0021cd0e0) Stream added, broadcasting: 1 I0525 22:20:23.819890 6 log.go:172] (0xc0016e1a20) Reply frame received for 1 I0525 22:20:23.819942 6 log.go:172] (0xc0016e1a20) (0xc002240640) Create stream I0525 22:20:23.819955 6 log.go:172] (0xc0016e1a20) (0xc002240640) Stream added, broadcasting: 3 I0525 22:20:23.844422 6 log.go:172] (0xc0016e1a20) Reply frame received for 3 I0525 22:20:23.844470 6 log.go:172] (0xc0016e1a20) (0xc001018aa0) Create stream I0525 22:20:23.844481 6 log.go:172] (0xc0016e1a20) (0xc001018aa0) Stream added, broadcasting: 5 I0525 22:20:23.845456 6 log.go:172] (0xc0016e1a20) Reply frame received for 5 I0525 22:20:23.912113 6 log.go:172] (0xc0016e1a20) Data frame received for 5 I0525 22:20:23.912140 6 log.go:172] (0xc001018aa0) (5) Data frame handling I0525 22:20:23.912186 6 log.go:172] (0xc0016e1a20) Data frame received for 3 I0525 22:20:23.912213 6 log.go:172] (0xc002240640) (3) Data frame handling I0525 22:20:23.912224 6 log.go:172] (0xc002240640) (3) Data frame sent I0525 22:20:23.912238 6 log.go:172] (0xc0016e1a20) Data frame received for 3 I0525 22:20:23.912245 6 log.go:172] (0xc002240640) (3) Data frame handling I0525 22:20:23.913529 6 log.go:172] (0xc0016e1a20) Data frame received for 1 I0525 22:20:23.913551 6 log.go:172] (0xc0021cd0e0) (1) Data frame handling I0525 22:20:23.913577 6 log.go:172] (0xc0021cd0e0) (1) Data frame sent I0525 22:20:23.913880 6 log.go:172] (0xc0016e1a20) (0xc0021cd0e0) Stream removed, broadcasting: 1 I0525 22:20:23.913906 6 log.go:172] (0xc0016e1a20) Go away received I0525 22:20:23.914075 6 log.go:172] (0xc0016e1a20) (0xc0021cd0e0) Stream removed, broadcasting: 1 I0525 22:20:23.914101 6 log.go:172] (0xc0016e1a20) (0xc002240640) Stream removed, broadcasting: 3 I0525 22:20:23.914124 6 log.go:172] 
(0xc0016e1a20) (0xc001018aa0) Stream removed, broadcasting: 5 May 25 22:20:23.914: INFO: Exec stderr: "" May 25 22:20:23.914: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:23.914: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:23.946946 6 log.go:172] (0xc00142be40) (0xc001018fa0) Create stream I0525 22:20:23.946975 6 log.go:172] (0xc00142be40) (0xc001018fa0) Stream added, broadcasting: 1 I0525 22:20:23.948700 6 log.go:172] (0xc00142be40) Reply frame received for 1 I0525 22:20:23.948738 6 log.go:172] (0xc00142be40) (0xc001019220) Create stream I0525 22:20:23.948747 6 log.go:172] (0xc00142be40) (0xc001019220) Stream added, broadcasting: 3 I0525 22:20:23.949805 6 log.go:172] (0xc00142be40) Reply frame received for 3 I0525 22:20:23.949840 6 log.go:172] (0xc00142be40) (0xc001418960) Create stream I0525 22:20:23.949855 6 log.go:172] (0xc00142be40) (0xc001418960) Stream added, broadcasting: 5 I0525 22:20:23.950624 6 log.go:172] (0xc00142be40) Reply frame received for 5 I0525 22:20:24.008637 6 log.go:172] (0xc00142be40) Data frame received for 5 I0525 22:20:24.008672 6 log.go:172] (0xc001418960) (5) Data frame handling I0525 22:20:24.008694 6 log.go:172] (0xc00142be40) Data frame received for 3 I0525 22:20:24.008704 6 log.go:172] (0xc001019220) (3) Data frame handling I0525 22:20:24.008718 6 log.go:172] (0xc001019220) (3) Data frame sent I0525 22:20:24.008730 6 log.go:172] (0xc00142be40) Data frame received for 3 I0525 22:20:24.008740 6 log.go:172] (0xc001019220) (3) Data frame handling I0525 22:20:24.010440 6 log.go:172] (0xc00142be40) Data frame received for 1 I0525 22:20:24.010466 6 log.go:172] (0xc001018fa0) (1) Data frame handling I0525 22:20:24.010478 6 log.go:172] (0xc001018fa0) (1) Data frame sent I0525 22:20:24.010495 6 log.go:172] (0xc00142be40) (0xc001018fa0) Stream removed, broadcasting: 1 I0525 22:20:24.010527 6 log.go:172] (0xc00142be40) Go away received I0525 22:20:24.010691 6 log.go:172] (0xc00142be40) (0xc001018fa0) Stream removed, broadcasting: 1 I0525 22:20:24.010728 6 log.go:172] (0xc00142be40) (0xc001019220) Stream removed, broadcasting: 3 I0525 22:20:24.010744 6 log.go:172] (0xc00142be40) (0xc001418960) Stream removed, broadcasting: 5 May 25 22:20:24.010: INFO: Exec stderr: "" May 25 22:20:24.010: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4164 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:24.010: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:24.045654 6 log.go:172] (0xc002652000) (0xc0021cd400) Create stream I0525 22:20:24.045682 6 log.go:172] (0xc002652000) (0xc0021cd400) Stream added, broadcasting: 1 I0525 22:20:24.047173 6 log.go:172] (0xc002652000) Reply frame received for 1 I0525 22:20:24.047206 6 log.go:172] (0xc002652000) (0xc001418aa0) Create stream I0525 22:20:24.047215 6 log.go:172] (0xc002652000) (0xc001418aa0) Stream added, broadcasting: 3 I0525 22:20:24.048086 6 log.go:172] (0xc002652000) Reply frame received for 3 I0525 22:20:24.048123 6 log.go:172] (0xc002652000) (0xc001418b40) Create stream I0525 22:20:24.048136 6 log.go:172] (0xc002652000) (0xc001418b40) Stream added, broadcasting: 5 I0525 22:20:24.049008 6 log.go:172] (0xc002652000) Reply frame received for 5 I0525 22:20:24.105537 6 log.go:172] (0xc002652000) Data frame received for 5 
I0525 22:20:24.105584 6 log.go:172] (0xc001418b40) (5) Data frame handling I0525 22:20:24.105643 6 log.go:172] (0xc002652000) Data frame received for 3 I0525 22:20:24.105657 6 log.go:172] (0xc001418aa0) (3) Data frame handling I0525 22:20:24.105679 6 log.go:172] (0xc001418aa0) (3) Data frame sent I0525 22:20:24.105699 6 log.go:172] (0xc002652000) Data frame received for 3 I0525 22:20:24.105712 6 log.go:172] (0xc001418aa0) (3) Data frame handling I0525 22:20:24.107056 6 log.go:172] (0xc002652000) Data frame received for 1 I0525 22:20:24.107082 6 log.go:172] (0xc0021cd400) (1) Data frame handling I0525 22:20:24.107107 6 log.go:172] (0xc0021cd400) (1) Data frame sent I0525 22:20:24.107128 6 log.go:172] (0xc002652000) (0xc0021cd400) Stream removed, broadcasting: 1 I0525 22:20:24.107150 6 log.go:172] (0xc002652000) Go away received I0525 22:20:24.107348 6 log.go:172] (0xc002652000) (0xc0021cd400) Stream removed, broadcasting: 1 I0525 22:20:24.107381 6 log.go:172] (0xc002652000) (0xc001418aa0) Stream removed, broadcasting: 3 I0525 22:20:24.107407 6 log.go:172] (0xc002652000) (0xc001418b40) Stream removed, broadcasting: 5 May 25 22:20:24.107: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:20:24.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4164" for this suite. • [SLOW TEST:11.282 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4303,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:20:24.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5248 STEP: creating a selector STEP: Creating the service pods in kubernetes May 25 22:20:24.171: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 25 22:20:46.423: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.22:8080/dial?request=hostname&protocol=udp&host=10.244.1.229&port=8081&tries=1'] Namespace:pod-network-test-5248 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:46.423: INFO: >>> kubeConfig: /root/.kube/config 
I0525 22:20:46.447547 6 log.go:172] (0xc00142bd90) (0xc002930c80) Create stream I0525 22:20:46.447581 6 log.go:172] (0xc00142bd90) (0xc002930c80) Stream added, broadcasting: 1 I0525 22:20:46.449789 6 log.go:172] (0xc00142bd90) Reply frame received for 1 I0525 22:20:46.449836 6 log.go:172] (0xc00142bd90) (0xc002930fa0) Create stream I0525 22:20:46.449847 6 log.go:172] (0xc00142bd90) (0xc002930fa0) Stream added, broadcasting: 3 I0525 22:20:46.452209 6 log.go:172] (0xc00142bd90) Reply frame received for 3 I0525 22:20:46.452261 6 log.go:172] (0xc00142bd90) (0xc0010180a0) Create stream I0525 22:20:46.452289 6 log.go:172] (0xc00142bd90) (0xc0010180a0) Stream added, broadcasting: 5 I0525 22:20:46.453626 6 log.go:172] (0xc00142bd90) Reply frame received for 5 I0525 22:20:46.595456 6 log.go:172] (0xc00142bd90) Data frame received for 3 I0525 22:20:46.595504 6 log.go:172] (0xc002930fa0) (3) Data frame handling I0525 22:20:46.595531 6 log.go:172] (0xc002930fa0) (3) Data frame sent I0525 22:20:46.595817 6 log.go:172] (0xc00142bd90) Data frame received for 5 I0525 22:20:46.595900 6 log.go:172] (0xc0010180a0) (5) Data frame handling I0525 22:20:46.596125 6 log.go:172] (0xc00142bd90) Data frame received for 3 I0525 22:20:46.596166 6 log.go:172] (0xc002930fa0) (3) Data frame handling I0525 22:20:46.597923 6 log.go:172] (0xc00142bd90) Data frame received for 1 I0525 22:20:46.597960 6 log.go:172] (0xc002930c80) (1) Data frame handling I0525 22:20:46.597996 6 log.go:172] (0xc002930c80) (1) Data frame sent I0525 22:20:46.598019 6 log.go:172] (0xc00142bd90) (0xc002930c80) Stream removed, broadcasting: 1 I0525 22:20:46.598046 6 log.go:172] (0xc00142bd90) Go away received I0525 22:20:46.598120 6 log.go:172] (0xc00142bd90) (0xc002930c80) Stream removed, broadcasting: 1 I0525 22:20:46.598134 6 log.go:172] (0xc00142bd90) (0xc002930fa0) Stream removed, broadcasting: 3 I0525 22:20:46.598139 6 log.go:172] (0xc00142bd90) (0xc0010180a0) Stream removed, broadcasting: 5 May 25 22:20:46.598: INFO: Waiting for responses: map[] May 25 22:20:46.601: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.22:8080/dial?request=hostname&protocol=udp&host=10.244.2.21&port=8081&tries=1'] Namespace:pod-network-test-5248 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:20:46.601: INFO: >>> kubeConfig: /root/.kube/config I0525 22:20:46.635451 6 log.go:172] (0xc0025a4000) (0xc001018820) Create stream I0525 22:20:46.635492 6 log.go:172] (0xc0025a4000) (0xc001018820) Stream added, broadcasting: 1 I0525 22:20:46.637385 6 log.go:172] (0xc0025a4000) Reply frame received for 1 I0525 22:20:46.637417 6 log.go:172] (0xc0025a4000) (0xc001018a00) Create stream I0525 22:20:46.637428 6 log.go:172] (0xc0025a4000) (0xc001018a00) Stream added, broadcasting: 3 I0525 22:20:46.638187 6 log.go:172] (0xc0025a4000) Reply frame received for 3 I0525 22:20:46.638215 6 log.go:172] (0xc0025a4000) (0xc002931a40) Create stream I0525 22:20:46.638227 6 log.go:172] (0xc0025a4000) (0xc002931a40) Stream added, broadcasting: 5 I0525 22:20:46.638971 6 log.go:172] (0xc0025a4000) Reply frame received for 5 I0525 22:20:46.707029 6 log.go:172] (0xc0025a4000) Data frame received for 3 I0525 22:20:46.707063 6 log.go:172] (0xc001018a00) (3) Data frame handling I0525 22:20:46.707087 6 log.go:172] (0xc001018a00) (3) Data frame sent I0525 22:20:46.707863 6 log.go:172] (0xc0025a4000) Data frame received for 3 I0525 22:20:46.707890 6 log.go:172] (0xc001018a00) (3) Data frame 
handling I0525 22:20:46.707971 6 log.go:172] (0xc0025a4000) Data frame received for 5 I0525 22:20:46.707993 6 log.go:172] (0xc002931a40) (5) Data frame handling I0525 22:20:46.710222 6 log.go:172] (0xc0025a4000) Data frame received for 1 I0525 22:20:46.710250 6 log.go:172] (0xc001018820) (1) Data frame handling I0525 22:20:46.710272 6 log.go:172] (0xc001018820) (1) Data frame sent I0525 22:20:46.710289 6 log.go:172] (0xc0025a4000) (0xc001018820) Stream removed, broadcasting: 1 I0525 22:20:46.710370 6 log.go:172] (0xc0025a4000) (0xc001018820) Stream removed, broadcasting: 1 I0525 22:20:46.710385 6 log.go:172] (0xc0025a4000) (0xc001018a00) Stream removed, broadcasting: 3 I0525 22:20:46.710399 6 log.go:172] (0xc0025a4000) (0xc002931a40) Stream removed, broadcasting: 5 May 25 22:20:46.710: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:20:46.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0525 22:20:46.710773 6 log.go:172] (0xc0025a4000) Go away received STEP: Destroying namespace "pod-network-test-5248" for this suite. • [SLOW TEST:22.604 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4321,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:20:46.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 25 22:20:46.834: INFO: Waiting up to 5m0s for pod "pod-4b24cd5c-5c2f-4a36-bf5d-813c263372e7" in namespace "emptydir-576" to be "success or failure" May 25 22:20:46.847: INFO: Pod "pod-4b24cd5c-5c2f-4a36-bf5d-813c263372e7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.108089ms May 25 22:20:48.851: INFO: Pod "pod-4b24cd5c-5c2f-4a36-bf5d-813c263372e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016987869s May 25 22:20:50.854: INFO: Pod "pod-4b24cd5c-5c2f-4a36-bf5d-813c263372e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01972132s
STEP: Saw pod success
May 25 22:20:50.854: INFO: Pod "pod-4b24cd5c-5c2f-4a36-bf5d-813c263372e7" satisfied condition "success or failure"
May 25 22:20:50.856: INFO: Trying to get logs from node jerma-worker2 pod pod-4b24cd5c-5c2f-4a36-bf5d-813c263372e7 container test-container:
STEP: delete the pod
May 25 22:20:50.962: INFO: Waiting for pod pod-4b24cd5c-5c2f-4a36-bf5d-813c263372e7 to disappear
May 25 22:20:50.979: INFO: Pod pod-4b24cd5c-5c2f-4a36-bf5d-813c263372e7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 22:20:50.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-576" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4325,"failed":0}
S
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 22:20:50.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 22:21:02.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-124" for this suite.
• [SLOW TEST:11.351 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":263,"skipped":4326,"failed":0}
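The quota test above follows a create/observe/delete cycle: set a hard limit, watch status.used rise when a matching object is created, and watch it fall back when the object is deleted. A minimal sketch of the first two steps, assuming v0.17-era client-go signatures; the quota name and "default" namespace are hypothetical:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Hard-limit the number of replication controllers in the namespace to 1.
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "rc-quota-demo"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceName("replicationcontrollers"): resource.MustParse("1"),
			},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas("default").Create(rq); err != nil {
		panic(err)
	}

	// Once the quota controller syncs, status.used counts existing RCs; creating
	// and then deleting an RC should move the counter up and back down, which is
	// exactly what the "captures ... creation" / "released usage" steps verify.
	got, err := cs.CoreV1().ResourceQuotas("default").Get("rc-quota-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("used: %v\n", got.Status.Used)
}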
[Conformance]","total":278,"completed":263,"skipped":4326,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:21:02.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-3e58fbcf-61fe-4104-a00c-370756892bda in namespace container-probe-6350 May 25 22:21:06.505: INFO: Started pod liveness-3e58fbcf-61fe-4104-a00c-370756892bda in namespace container-probe-6350 STEP: checking the pod's current state and verifying that restartCount is present May 25 22:21:06.507: INFO: Initial restart count of pod liveness-3e58fbcf-61fe-4104-a00c-370756892bda is 0 May 25 22:21:18.551: INFO: Restart count of pod container-probe-6350/liveness-3e58fbcf-61fe-4104-a00c-370756892bda is now 1 (12.04395768s elapsed) May 25 22:21:38.595: INFO: Restart count of pod container-probe-6350/liveness-3e58fbcf-61fe-4104-a00c-370756892bda is now 2 (32.087796473s elapsed) May 25 22:21:58.639: INFO: Restart count of pod container-probe-6350/liveness-3e58fbcf-61fe-4104-a00c-370756892bda is now 3 (52.131734906s elapsed) May 25 22:22:18.681: INFO: Restart count of pod container-probe-6350/liveness-3e58fbcf-61fe-4104-a00c-370756892bda is now 4 (1m12.174489895s elapsed) May 25 22:23:28.860: INFO: Restart count of pod container-probe-6350/liveness-3e58fbcf-61fe-4104-a00c-370756892bda is now 5 (2m22.353457843s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:23:28.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6350" for this suite. • [SLOW TEST:146.567 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4337,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 22:23:28.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 22:23:46.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4221" for this suite.
• [SLOW TEST:17.162 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a secret. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":265,"skipped":4345,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 22:23:46.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-upd-8ef6f94c-d979-4e9d-950d-110e822b282b
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 22:23:52.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5126" for this suite.
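The ConfigMap test above relies on the two payload fields of the ConfigMap API: Data for UTF-8 text and BinaryData for arbitrary bytes, both of which the kubelet projects as files when the ConfigMap is mounted as a volume. A minimal sketch, assuming v0.17-era client-go signatures; the ConfigMap/pod names and key names are hypothetical:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Text keys live in Data, raw bytes in BinaryData; each key becomes a file.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-binary-demo"},
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe}},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(cm); err != nil {
		panic(err)
	}

	// A reader pod that prints both projected files back out of the volume.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "cm-binary-reader"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-binary-demo"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "reader",
				Image:        "docker.io/library/busybox:1.29",
				Command:      []string{"sh", "-c", "cat /etc/cm/data-1 && od -An -tx1 /etc/cm/dump.bin"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
		panic(err)
	}
}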
• [SLOW TEST:6.159 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4374,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 22:23:52.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
May 25 22:23:52.845: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
May 25 22:23:54.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726042232, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726042232, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726042232, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726042232, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 25 22:23:57.921: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 25 22:23:57.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 25 22:23:59.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-210" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:7.146 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert a non homogeneous list of CRs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":267,"skipped":4393,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 25 22:23:59.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2290;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2290;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2290.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2290.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2290.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2290.svc;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 167.182.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.182.167_udp@PTR;check="$$(dig +tcp +noall +answer +search 167.182.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.182.167_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2290;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2290;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2290.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2290.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2290.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2290.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2290.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2290.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2290.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2290.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2290.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 167.182.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.182.167_udp@PTR;check="$$(dig +tcp +noall +answer +search 167.182.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.182.167_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 25 22:24:07.609: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.612: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.614: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.616: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.619: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.622: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.624: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.627: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.656: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.659: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.662: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.664: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.666: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.668: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.670: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.672: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:07.687: INFO: Lookups using dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] May 25 22:24:12.692: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.696: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.700: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.704: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.714: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.717: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.739: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.742: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.745: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.748: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.752: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.756: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.760: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.763: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:12.779: INFO: Lookups using dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] May 25 22:24:17.694: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.698: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.703: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.706: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod 
dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.709: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.712: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.715: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.718: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.740: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.743: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.746: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.749: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.752: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.755: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.758: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.761: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:17.779: INFO: Lookups using dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] May 25 22:24:22.692: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.695: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.698: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.702: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.704: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.707: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.710: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.713: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.813: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.816: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.819: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.821: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.823: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod 
dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.825: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.827: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.829: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:22.847: INFO: Lookups using dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] May 25 22:24:27.693: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.697: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.700: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.704: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.710: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.712: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.714: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod 
dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.734: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.736: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.739: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.742: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.746: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.748: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.751: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.754: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:27.772: INFO: Lookups using dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] May 25 22:24:32.693: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.697: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.700: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the 
server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.703: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.711: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.714: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.733: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.736: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.738: INFO: Unable to read jessie_udp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.741: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290 from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.743: INFO: Unable to read jessie_udp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.746: INFO: Unable to read jessie_tcp@dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.749: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.751: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc from pod dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff: the server could not find the requested resource (get pods dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff) May 25 22:24:32.769: INFO: Lookups using dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2290 wheezy_tcp@dns-test-service.dns-2290 wheezy_udp@dns-test-service.dns-2290.svc wheezy_tcp@dns-test-service.dns-2290.svc wheezy_udp@_http._tcp.dns-test-service.dns-2290.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2290.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2290 jessie_tcp@dns-test-service.dns-2290 jessie_udp@dns-test-service.dns-2290.svc jessie_tcp@dns-test-service.dns-2290.svc jessie_udp@_http._tcp.dns-test-service.dns-2290.svc jessie_tcp@_http._tcp.dns-test-service.dns-2290.svc] May 25 22:24:37.771: INFO: DNS probes using dns-2290/dns-test-8b4fe926-7b27-48fc-88fb-c6aed06597ff succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:24:38.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2290" for this suite. • [SLOW TEST:39.366 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":268,"skipped":4399,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:24:38.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 25 22:24:38.848: INFO: Waiting up to 5m0s for pod "pod-179c1630-af52-4661-8b05-8c2f8a23e0e0" in namespace "emptydir-6959" to be "success or failure" May 25 22:24:38.858: INFO: Pod "pod-179c1630-af52-4661-8b05-8c2f8a23e0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.165298ms May 25 22:24:40.871: INFO: Pod "pod-179c1630-af52-4661-8b05-8c2f8a23e0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02231883s May 25 22:24:42.891: INFO: Pod "pod-179c1630-af52-4661-8b05-8c2f8a23e0e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.043116554s STEP: Saw pod success May 25 22:24:42.891: INFO: Pod "pod-179c1630-af52-4661-8b05-8c2f8a23e0e0" satisfied condition "success or failure" May 25 22:24:42.894: INFO: Trying to get logs from node jerma-worker pod pod-179c1630-af52-4661-8b05-8c2f8a23e0e0 container test-container: STEP: delete the pod May 25 22:24:42.925: INFO: Waiting for pod pod-179c1630-af52-4661-8b05-8c2f8a23e0e0 to disappear May 25 22:24:42.930: INFO: Pod pod-179c1630-af52-4661-8b05-8c2f8a23e0e0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:24:42.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6959" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4405,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:24:43.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0525 22:24:53.322815 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 25 22:24:53.322: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:24:53.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2473" for this suite. 
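The cascading delete exercised above can be reproduced outside the suite with a short client-go program. The following is an illustrative sketch only, not the framework's own code: the RC name is made up, and the context-free Delete signature matches client-go releases contemporary with this v1.17 cluster (newer releases take a context and value options).

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite logs (">>> kubeConfig").
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Background propagation deletes the RC first, then lets the garbage
        // collector remove the pods it owned -- the "not orphaning" behaviour
        // this test asserts. "simpletest-rc" is a made-up name.
        policy := metav1.DeletePropagationBackground
        err = client.CoreV1().ReplicationControllers("gc-2473").Delete(
            "simpletest-rc", &metav1.DeleteOptions{PropagationPolicy: &policy})
        fmt.Println("delete returned:", err)
    }

Foreground propagation (metav1.DeletePropagationForeground) would instead block the RC's own disappearance until its pods are gone; Orphan would leave the pods running, which is the behaviour the sibling "orphaning" tests check.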
• [SLOW TEST:10.150 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":270,"skipped":4411,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:24:53.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 25 22:24:57.931: INFO: Successfully updated pod "labelsupdate26d72a30-2e24-4b04-9b8e-999891a9c737" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:24:59.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2632" for this suite. 
• [SLOW TEST:6.653 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4418,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:24:59.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-411e6a9f-4503-4abf-95ea-edd38bf23e1f STEP: Creating a pod to test consume secrets May 25 22:25:00.064: INFO: Waiting up to 5m0s for pod "pod-secrets-531cc620-3554-4c81-b7ea-004e53115b61" in namespace "secrets-7272" to be "success or failure" May 25 22:25:00.107: INFO: Pod "pod-secrets-531cc620-3554-4c81-b7ea-004e53115b61": Phase="Pending", Reason="", readiness=false. Elapsed: 42.425005ms May 25 22:25:02.137: INFO: Pod "pod-secrets-531cc620-3554-4c81-b7ea-004e53115b61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072351705s May 25 22:25:04.141: INFO: Pod "pod-secrets-531cc620-3554-4c81-b7ea-004e53115b61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076321282s STEP: Saw pod success May 25 22:25:04.141: INFO: Pod "pod-secrets-531cc620-3554-4c81-b7ea-004e53115b61" satisfied condition "success or failure" May 25 22:25:04.144: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-531cc620-3554-4c81-b7ea-004e53115b61 container secret-volume-test: STEP: delete the pod May 25 22:25:04.259: INFO: Waiting for pod pod-secrets-531cc620-3554-4c81-b7ea-004e53115b61 to disappear May 25 22:25:04.278: INFO: Pod pod-secrets-531cc620-3554-4c81-b7ea-004e53115b61 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:25:04.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7272" for this suite. 
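The pod in the "consumable in multiple volumes" test mounts one secret through two separate volumes. Roughly, using k8s.io/api types (the image, pod name, and mount paths are assumptions; the secret name is the one created above):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        secretName := "secret-test-411e6a9f-4503-4abf-95ea-edd38bf23e1f" // from the log above
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
            Spec: corev1.PodSpec{
                // Two volumes backed by the same secret, mounted at two paths.
                Volumes: []corev1.Volume{
                    {Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
                    {Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
                },
                Containers: []corev1.Container{{
                    Name:  "secret-volume-test",
                    Image: "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // illustrative
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                        {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                    },
                }},
                RestartPolicy: corev1.RestartPolicyNever,
            },
        }
        fmt.Printf("%s mounts %d volumes backed by one secret\n", pod.Name, len(pod.Spec.Volumes))
    }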
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4431,"failed":0} ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:25:04.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-b8112a79-d035-4c6c-8145-1afea8d84a08 STEP: Creating secret with name s-test-opt-upd-6de63003-3066-41ee-9469-2ffecf430c27 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b8112a79-d035-4c6c-8145-1afea8d84a08 STEP: Updating secret s-test-opt-upd-6de63003-3066-41ee-9469-2ffecf430c27 STEP: Creating secret with name s-test-opt-create-a539723f-892b-44f3-b897-7433d378e422 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:25:14.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9531" for this suite. • [SLOW TEST:10.257 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4431,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:25:14.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 25 22:25:14.623: INFO: Created pod &Pod{ObjectMeta:{dns-5222 dns-5222 /api/v1/namespaces/dns-5222/pods/dns-5222 6f61068f-3b10-4c77-883e-14556cf52d10 19137555 0 2020-05-25 22:25:14 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rvvzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rvvzb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rvvzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
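Stripped of defaults, the DNS-relevant part of the pod spec dumped above reduces to the following sketch using k8s.io/api types (only DNSPolicy and DNSConfig are shown; everything else in the dump is framework boilerplate):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        spec := corev1.PodSpec{
            // DNSPolicy "None" tells the kubelet to ignore the cluster resolver
            // and generate the pod's /etc/resolv.conf purely from DNSConfig.
            DNSPolicy: corev1.DNSNone,
            DNSConfig: &corev1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
        }
        fmt.Println("nameservers:", spec.DNSConfig.Nameservers,
            "searches:", spec.DNSConfig.Searches)
    }

The two agnhost exec probes that follow (dns-suffix and dns-server-list) simply read the resulting resolv.conf back out of the running container to confirm the search list and nameserver took effect.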
May 25 22:25:18.697: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5222 PodName:dns-5222 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:25:18.697: INFO: >>> kubeConfig: /root/.kube/config I0525 22:25:18.730151 6 log.go:172] (0xc001d908f0) (0xc002834500) Create stream I0525 22:25:18.730181 6 log.go:172] (0xc001d908f0) (0xc002834500) Stream added, broadcasting: 1 I0525 22:25:18.733001 6 log.go:172] (0xc001d908f0) Reply frame received for 1 I0525 22:25:18.733049 6 log.go:172] (0xc001d908f0) (0xc001a8e140) Create stream I0525 22:25:18.733065 6 log.go:172] (0xc001d908f0) (0xc001a8e140) Stream added, broadcasting: 3 I0525 22:25:18.734127 6 log.go:172] (0xc001d908f0) Reply frame received for 3 I0525 22:25:18.734155 6 log.go:172] (0xc001d908f0) (0xc0028345a0) Create stream I0525 22:25:18.734167 6 log.go:172] (0xc001d908f0) (0xc0028345a0) Stream added, broadcasting: 5 I0525 22:25:18.735029 6 log.go:172] (0xc001d908f0) Reply frame received for 5 I0525 22:25:18.817441 6 log.go:172] (0xc001d908f0) Data frame received for 3 I0525 22:25:18.817477 6 log.go:172] (0xc001a8e140) (3) Data frame handling I0525 22:25:18.817499 6 log.go:172] (0xc001a8e140) (3) Data frame sent I0525 22:25:18.818826 6 log.go:172] (0xc001d908f0) Data frame received for 3 I0525 22:25:18.818847 6 log.go:172] (0xc001d908f0) Data frame received for 5 I0525 22:25:18.818874 6 log.go:172] (0xc0028345a0) (5) Data frame handling I0525 22:25:18.818893 6 log.go:172] (0xc001a8e140) (3) Data frame handling I0525 22:25:18.820455 6 log.go:172] (0xc001d908f0) Data frame received for 1 I0525 22:25:18.820470 6 log.go:172] (0xc002834500) (1) Data frame handling I0525 22:25:18.820477 6 log.go:172] (0xc002834500) (1) Data frame sent I0525 22:25:18.820485 6 log.go:172] (0xc001d908f0) (0xc002834500) Stream removed, broadcasting: 1 I0525 22:25:18.820499 6 log.go:172] (0xc001d908f0) Go away received I0525 22:25:18.820614 6 log.go:172] (0xc001d908f0) (0xc002834500) Stream removed, broadcasting: 1 I0525 22:25:18.820643 6 log.go:172] (0xc001d908f0) (0xc001a8e140) Stream removed, broadcasting: 3 I0525 22:25:18.820652 6 log.go:172] (0xc001d908f0) (0xc0028345a0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 25 22:25:18.820: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5222 PodName:dns-5222 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 25 22:25:18.820: INFO: >>> kubeConfig: /root/.kube/config I0525 22:25:18.848152 6 log.go:172] (0xc0016e0370) (0xc001a8e6e0) Create stream I0525 22:25:18.848179 6 log.go:172] (0xc0016e0370) (0xc001a8e6e0) Stream added, broadcasting: 1 I0525 22:25:18.850207 6 log.go:172] (0xc0016e0370) Reply frame received for 1 I0525 22:25:18.850251 6 log.go:172] (0xc0016e0370) (0xc001a0d5e0) Create stream I0525 22:25:18.850267 6 log.go:172] (0xc0016e0370) (0xc001a0d5e0) Stream added, broadcasting: 3 I0525 22:25:18.851211 6 log.go:172] (0xc0016e0370) Reply frame received for 3 I0525 22:25:18.851238 6 log.go:172] (0xc0016e0370) (0xc002834640) Create stream I0525 22:25:18.851249 6 log.go:172] (0xc0016e0370) (0xc002834640) Stream added, broadcasting: 5 I0525 22:25:18.852292 6 log.go:172] (0xc0016e0370) Reply frame received for 5 I0525 22:25:18.932902 6 log.go:172] (0xc0016e0370) Data frame received for 3 I0525 22:25:18.932956 6 log.go:172] (0xc001a0d5e0) (3) Data frame handling I0525 22:25:18.932995 6 log.go:172] (0xc001a0d5e0) (3) Data frame sent I0525 22:25:18.935164 6 log.go:172] (0xc0016e0370) Data frame received for 3 I0525 22:25:18.935190 6 log.go:172] (0xc001a0d5e0) (3) Data frame handling I0525 22:25:18.935424 6 log.go:172] (0xc0016e0370) Data frame received for 5 I0525 22:25:18.935448 6 log.go:172] (0xc002834640) (5) Data frame handling I0525 22:25:18.937547 6 log.go:172] (0xc0016e0370) Data frame received for 1 I0525 22:25:18.937566 6 log.go:172] (0xc001a8e6e0) (1) Data frame handling I0525 22:25:18.937576 6 log.go:172] (0xc001a8e6e0) (1) Data frame sent I0525 22:25:18.937586 6 log.go:172] (0xc0016e0370) (0xc001a8e6e0) Stream removed, broadcasting: 1 I0525 22:25:18.937598 6 log.go:172] (0xc0016e0370) Go away received I0525 22:25:18.937806 6 log.go:172] (0xc0016e0370) (0xc001a8e6e0) Stream removed, broadcasting: 1 I0525 22:25:18.937835 6 log.go:172] (0xc0016e0370) (0xc001a0d5e0) Stream removed, broadcasting: 3 I0525 22:25:18.937847 6 log.go:172] (0xc0016e0370) (0xc002834640) Stream removed, broadcasting: 5 May 25 22:25:18.937: INFO: Deleting pod dns-5222... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:25:18.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5222" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":274,"skipped":4440,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:25:18.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:25:35.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9168" for this suite. • [SLOW TEST:16.497 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":275,"skipped":4450,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:25:35.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-a1338335-408b-4517-9d03-2b73815d76b3 STEP: Creating a pod to test consume secrets May 25 22:25:35.772: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e71e5535-98d3-4883-b3b7-52faf5822e15" in namespace "projected-9595" to be "success or failure" May 25 22:25:35.794: INFO: Pod "pod-projected-secrets-e71e5535-98d3-4883-b3b7-52faf5822e15": Phase="Pending", Reason="", readiness=false. Elapsed: 21.957121ms May 25 22:25:37.982: INFO: Pod "pod-projected-secrets-e71e5535-98d3-4883-b3b7-52faf5822e15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21019002s May 25 22:25:39.987: INFO: Pod "pod-projected-secrets-e71e5535-98d3-4883-b3b7-52faf5822e15": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.214743843s STEP: Saw pod success May 25 22:25:39.987: INFO: Pod "pod-projected-secrets-e71e5535-98d3-4883-b3b7-52faf5822e15" satisfied condition "success or failure" May 25 22:25:39.990: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-e71e5535-98d3-4883-b3b7-52faf5822e15 container projected-secret-volume-test: STEP: delete the pod May 25 22:25:40.012: INFO: Waiting for pod pod-projected-secrets-e71e5535-98d3-4883-b3b7-52faf5822e15 to disappear May 25 22:25:40.016: INFO: Pod pod-projected-secrets-e71e5535-98d3-4883-b3b7-52faf5822e15 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:25:40.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9595" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4512,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:25:40.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 25 22:25:40.116: INFO: Waiting up to 5m0s for pod "pod-3ff63747-b7ce-4ee7-af78-a6260825d122" in namespace "emptydir-4598" to be "success or failure" May 25 22:25:40.130: INFO: Pod "pod-3ff63747-b7ce-4ee7-af78-a6260825d122": Phase="Pending", Reason="", readiness=false. Elapsed: 14.661439ms May 25 22:25:42.263: INFO: Pod "pod-3ff63747-b7ce-4ee7-af78-a6260825d122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1474478s May 25 22:25:44.268: INFO: Pod "pod-3ff63747-b7ce-4ee7-af78-a6260825d122": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.151758378s STEP: Saw pod success May 25 22:25:44.268: INFO: Pod "pod-3ff63747-b7ce-4ee7-af78-a6260825d122" satisfied condition "success or failure" May 25 22:25:44.271: INFO: Trying to get logs from node jerma-worker2 pod pod-3ff63747-b7ce-4ee7-af78-a6260825d122 container test-container: STEP: delete the pod May 25 22:25:44.328: INFO: Waiting for pod pod-3ff63747-b7ce-4ee7-af78-a6260825d122 to disappear May 25 22:25:44.365: INFO: Pod pod-3ff63747-b7ce-4ee7-af78-a6260825d122 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:25:44.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4598" for this suite. 
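The "(non-root,0666,tmpfs)" variant differs from the earlier "(root,0666,default)" run only in the volume medium and the container's user. A rough sketch with k8s.io/api types (the UID and image are illustrative; the 0666 file mode is applied by the test container at runtime, not in the spec):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        uid := int64(1001) // illustrative non-root UID; the real test picks its own

        spec := corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:            "test-container",
                Image:           "gcr.io/kubernetes-e2e-test-images/mounttest:1.0", // illustrative
                SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        }
        fmt.Printf("emptyDir medium: %q\n", spec.Volumes[0].EmptyDir.Medium)
    }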
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4513,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 25 22:25:44.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-997fb1df-eec8-4c12-ae02-314c11f0032f STEP: Creating secret with name s-test-opt-upd-3e3663d9-2209-44b5-961f-d7d4af7f81ed STEP: Creating the pod STEP: Deleting secret s-test-opt-del-997fb1df-eec8-4c12-ae02-314c11f0032f STEP: Updating secret s-test-opt-upd-3e3663d9-2209-44b5-961f-d7d4af7f81ed STEP: Creating secret with name s-test-opt-create-cd4bdcb1-023b-4fe5-a6d5-5f7c26e4b9d5 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 25 22:25:52.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4966" for this suite. • [SLOW TEST:8.295 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSMay 25 22:25:52.668: INFO: Running AfterSuite actions on all nodes May 25 22:25:52.668: INFO: Running AfterSuite actions on node 1 May 25 22:25:52.668: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0} Ran 278 of 4842 Specs in 4590.346 seconds SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped PASS