I0408 21:06:51.011535 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0408 21:06:51.011807 6 e2e.go:109] Starting e2e run "feca5b2b-07c1-4797-b0cb-b51a89d18742" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586380009 - Will randomize all specs
Will run 278 of 4842 specs

Apr 8 21:06:51.078: INFO: >>> kubeConfig: /root/.kube/config
Apr 8 21:06:51.082: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 8 21:06:51.102: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 8 21:06:51.128: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 8 21:06:51.128: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 8 21:06:51.128: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 8 21:06:51.139: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 8 21:06:51.139: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 8 21:06:51.139: INFO: e2e test version: v1.17.4
Apr 8 21:06:51.140: INFO: kube-apiserver version: v1.17.2
Apr 8 21:06:51.140: INFO: >>> kubeConfig: /root/.kube/config
Apr 8 21:06:51.146: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:06:51.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Apr 8 21:06:51.214: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 8 21:06:55.247: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:06:55.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6112" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":24,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:06:55.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:06:55.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4678" for this suite.
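The Table-transformation test above expects the apiserver to answer 406 Not Acceptable when a client asks for a representation the backend cannot produce. As context, this is standard HTTP content negotiation; the sketch below is a hypothetical, simplified helper (it ignores media-type parameters such as `as=Table` and quality values), not the Kubernetes apiserver's implementation:

```python
# Simplified content negotiation: 406 when no acceptable media type is
# supported. Hypothetical sketch -- not the Kubernetes apiserver's code.

SUPPORTED = {"application/json"}  # what this hypothetical backend can serve
WILDCARDS = {"*/*", "application/*"}

def negotiate(accept_header: str) -> int:
    """Return 200 if any media type in the Accept header is supported,
    else 406 (Not Acceptable). Parameters and q-values are ignored."""
    requested = [part.split(";")[0].strip() for part in accept_header.split(",")]
    if any(mt in SUPPORTED or mt in WILDCARDS for mt in requested):
        return 200
    return 406

print(negotiate("application/json"))   # 200
print(negotiate("application/yaml"))   # 406
print(negotiate("text/html, */*"))     # 200 -- wildcard always matches
```

A real apiserver also inspects the `as=Table;v=v1;g=meta.k8s.io` parameters before deciding; the conformance test exercises exactly the case where that specific representation is unavailable.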
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":2,"skipped":28,"failed":0}
SS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:06:55.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test env composition
Apr 8 21:06:55.431: INFO: Waiting up to 5m0s for pod "var-expansion-6c90edbc-4af5-4302-a4f0-74bcde522e5f" in namespace "var-expansion-8869" to be "success or failure"
Apr 8 21:06:55.442: INFO: Pod "var-expansion-6c90edbc-4af5-4302-a4f0-74bcde522e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.983215ms
Apr 8 21:06:57.445: INFO: Pod "var-expansion-6c90edbc-4af5-4302-a4f0-74bcde522e5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014277432s
Apr 8 21:06:59.450: INFO: Pod "var-expansion-6c90edbc-4af5-4302-a4f0-74bcde522e5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018488307s
STEP: Saw pod success
Apr 8 21:06:59.450: INFO: Pod "var-expansion-6c90edbc-4af5-4302-a4f0-74bcde522e5f" satisfied condition "success or failure"
Apr 8 21:06:59.453: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-6c90edbc-4af5-4302-a4f0-74bcde522e5f container dapi-container:
STEP: delete the pod
Apr 8 21:06:59.497: INFO: Waiting for pod var-expansion-6c90edbc-4af5-4302-a4f0-74bcde522e5f to disappear
Apr 8 21:06:59.508: INFO: Pod var-expansion-6c90edbc-4af5-4302-a4f0-74bcde522e5f no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:06:59.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8869" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":30,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:06:59.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 8 21:06:59.570: INFO: Waiting up to 5m0s for pod "pod-916e03b0-867c-41e5-bfe2-f7723bd7f10e" in namespace "emptydir-3758" to be "success or failure"
Apr 8 21:06:59.595: INFO: Pod "pod-916e03b0-867c-41e5-bfe2-f7723bd7f10e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.562763ms
Apr 8 21:07:01.600: INFO: Pod "pod-916e03b0-867c-41e5-bfe2-f7723bd7f10e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03003487s
Apr 8 21:07:03.604: INFO: Pod "pod-916e03b0-867c-41e5-bfe2-f7723bd7f10e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034494381s
STEP: Saw pod success
Apr 8 21:07:03.604: INFO: Pod "pod-916e03b0-867c-41e5-bfe2-f7723bd7f10e" satisfied condition "success or failure"
Apr 8 21:07:03.607: INFO: Trying to get logs from node jerma-worker pod pod-916e03b0-867c-41e5-bfe2-f7723bd7f10e container test-container:
STEP: delete the pod
Apr 8 21:07:03.639: INFO: Waiting for pod pod-916e03b0-867c-41e5-bfe2-f7723bd7f10e to disappear
Apr 8 21:07:03.685: INFO: Pod pod-916e03b0-867c-41e5-bfe2-f7723bd7f10e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:07:03.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3758" for this suite.
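The "Waiting up to 5m0s … / Phase="Pending" … Elapsed: …" entries above come from a fixed-interval poll: the framework re-reads the pod's phase every couple of seconds until it reaches a terminal phase or a timeout expires. The following is a hypothetical stand-in for that pattern (names like `wait_for_terminal_phase` are illustrative, not the e2e framework's API):

```python
# Sketch of the poll-until-terminal-phase pattern visible in the log.
# Hypothetical helper -- not the Kubernetes e2e framework's actual code.
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed', or raise
    TimeoutError once `timeout` seconds have elapsed."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Canned phase sequence mirroring the log (Pending, Pending, Succeeded);
# the sleep is stubbed out so the example runs instantly.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), sleep=lambda s: None)
print(result)  # Succeeded
```

Injecting the clock and sleep functions keeps the loop testable; the real framework uses the same idea via its wait utilities with a 2s poll interval and a 5m timeout, which is why consecutive log entries are roughly two seconds apart.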
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":36,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:07:03.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:07:03.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6114" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":47,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:07:03.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
Apr 8 21:07:03.920: INFO: Waiting up to 5m0s for pod "client-containers-ee4bbc70-73b8-42de-95e9-6913d1aa6f36" in namespace "containers-3640" to be "success or failure"
Apr 8 21:07:03.966: INFO: Pod "client-containers-ee4bbc70-73b8-42de-95e9-6913d1aa6f36": Phase="Pending", Reason="", readiness=false. Elapsed: 46.692895ms
Apr 8 21:07:05.971: INFO: Pod "client-containers-ee4bbc70-73b8-42de-95e9-6913d1aa6f36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051109042s
Apr 8 21:07:07.975: INFO: Pod "client-containers-ee4bbc70-73b8-42de-95e9-6913d1aa6f36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055106754s
STEP: Saw pod success
Apr 8 21:07:07.975: INFO: Pod "client-containers-ee4bbc70-73b8-42de-95e9-6913d1aa6f36" satisfied condition "success or failure"
Apr 8 21:07:07.978: INFO: Trying to get logs from node jerma-worker pod client-containers-ee4bbc70-73b8-42de-95e9-6913d1aa6f36 container test-container:
STEP: delete the pod
Apr 8 21:07:07.999: INFO: Waiting for pod client-containers-ee4bbc70-73b8-42de-95e9-6913d1aa6f36 to disappear
Apr 8 21:07:08.003: INFO: Pod client-containers-ee4bbc70-73b8-42de-95e9-6913d1aa6f36 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:07:08.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3640" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":90,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:07:08.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Apr 8 21:07:08.070: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 8 21:07:08.082: INFO: Waiting for terminating namespaces to be deleted...
Apr 8 21:07:08.085: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Apr 8 21:07:08.091: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 8 21:07:08.091: INFO: Container kube-proxy ready: true, restart count 0
Apr 8 21:07:08.091: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 8 21:07:08.091: INFO: Container kindnet-cni ready: true, restart count 0
Apr 8 21:07:08.091: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Apr 8 21:07:08.096: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
Apr 8 21:07:08.096: INFO: Container kube-bench ready: false, restart count 0
Apr 8 21:07:08.096: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 8 21:07:08.096: INFO: Container kindnet-cni ready: true, restart count 0
Apr 8 21:07:08.096: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 8 21:07:08.096: INFO: Container kube-proxy ready: true, restart count 0
Apr 8 21:07:08.096: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
Apr 8 21:07:08.096: INFO: Container kube-hunter ready: false, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-93a5f205-003a-410b-93e6-52a5f85452d7 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-93a5f205-003a-410b-93e6-52a5f85452d7 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-93a5f205-003a-410b-93e6-52a5f85452d7
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:12:16.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7307" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:308.317 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":7,"skipped":121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:12:16.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 8 21:12:16.388: INFO: Waiting up to 5m0s for pod "pod-2891152b-e3f5-4e35-b7ad-415b3138c24b" in namespace "emptydir-1702" to be "success or failure"
Apr 8 21:12:16.404: INFO: Pod "pod-2891152b-e3f5-4e35-b7ad-415b3138c24b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.880057ms
Apr 8 21:12:18.408: INFO: Pod "pod-2891152b-e3f5-4e35-b7ad-415b3138c24b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01951539s
Apr 8 21:12:20.412: INFO: Pod "pod-2891152b-e3f5-4e35-b7ad-415b3138c24b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0233953s
STEP: Saw pod success
Apr 8 21:12:20.412: INFO: Pod "pod-2891152b-e3f5-4e35-b7ad-415b3138c24b" satisfied condition "success or failure"
Apr 8 21:12:20.415: INFO: Trying to get logs from node jerma-worker pod pod-2891152b-e3f5-4e35-b7ad-415b3138c24b container test-container:
STEP: delete the pod
Apr 8 21:12:20.460: INFO: Waiting for pod pod-2891152b-e3f5-4e35-b7ad-415b3138c24b to disappear
Apr 8 21:12:20.470: INFO: Pod pod-2891152b-e3f5-4e35-b7ad-415b3138c24b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:12:20.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1702" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:12:20.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7006.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7006.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7006.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7006.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7006.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7006.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 8 21:12:26.655: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:26.659: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:26.663: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:26.666: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:26.677: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:26.681: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:26.684: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:26.688: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:26.695: INFO: Lookups using dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local]
Apr 8 21:12:31.700: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:31.704: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:31.707: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:31.709: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:31.717: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:31.720: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:31.722: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:31.725: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:31.730: INFO: Lookups using dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local]
Apr 8 21:12:36.700: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:36.709: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:36.714: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:36.718: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:36.724: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:36.726: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:36.728: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:36.730: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:36.734: INFO: Lookups using dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local]
Apr 8 21:12:41.700: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:41.704: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:41.707: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:41.712: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:41.721: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:41.724: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:41.727: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:41.730: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a)
Apr 8 21:12:41.736: INFO: Lookups using dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local]
Apr 8 21:12:46.700: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local
from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:46.703: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:46.707: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:46.710: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:46.718: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:46.720: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:46.723: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:46.726: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the 
server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:46.731: INFO: Lookups using dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local] Apr 8 21:12:51.700: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:51.704: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:51.708: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:51.711: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:51.720: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod 
dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:51.723: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:51.726: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:51.728: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local from pod dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a: the server could not find the requested resource (get pods dns-test-af597935-06c2-40fe-b251-a4901d17836a) Apr 8 21:12:51.734: INFO: Lookups using dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7006.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7006.svc.cluster.local jessie_udp@dns-test-service-2.dns-7006.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7006.svc.cluster.local] Apr 8 21:12:56.735: INFO: DNS probes using dns-7006/dns-test-af597935-06c2-40fe-b251-a4901d17836a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:12:56.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "dns-7006" for this suite. • [SLOW TEST:36.494 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":9,"skipped":178,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:12:56.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:12:57.279: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 8 21:13:00.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7263 create -f -' Apr 8 21:13:02.814: INFO: stderr: "" Apr 8 21:13:02.814: INFO: stdout: "e2e-test-crd-publish-openapi-1109-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 8 21:13:02.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7263 delete e2e-test-crd-publish-openapi-1109-crds 
test-cr' Apr 8 21:13:02.924: INFO: stderr: "" Apr 8 21:13:02.924: INFO: stdout: "e2e-test-crd-publish-openapi-1109-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 8 21:13:02.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7263 apply -f -' Apr 8 21:13:03.162: INFO: stderr: "" Apr 8 21:13:03.162: INFO: stdout: "e2e-test-crd-publish-openapi-1109-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 8 21:13:03.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7263 delete e2e-test-crd-publish-openapi-1109-crds test-cr' Apr 8 21:13:03.273: INFO: stderr: "" Apr 8 21:13:03.273: INFO: stdout: "e2e-test-crd-publish-openapi-1109-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 8 21:13:03.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1109-crds' Apr 8 21:13:03.504: INFO: stderr: "" Apr 8 21:13:03.505: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1109-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:13:06.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7263" for this suite. 
• [SLOW TEST:9.485 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":10,"skipped":180,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:13:06.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 8 21:13:10.571: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:13:10.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-459" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":205,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:13:10.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:13:10.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Apr 8 21:13:11.289: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T21:13:11Z generation:1 name:name1 resourceVersion:6501150 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d9043a90-f6bb-4540-a987-2ddaaa37c46b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Apr 8 21:13:21.294: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 
content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T21:13:21Z generation:1 name:name2 resourceVersion:6501193 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:963d70c1-6794-4244-bd6d-b7e350209339] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Apr 8 21:13:31.301: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T21:13:11Z generation:2 name:name1 resourceVersion:6501223 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d9043a90-f6bb-4540-a987-2ddaaa37c46b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Apr 8 21:13:41.306: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T21:13:21Z generation:2 name:name2 resourceVersion:6501253 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:963d70c1-6794-4244-bd6d-b7e350209339] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Apr 8 21:13:51.329: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T21:13:11Z generation:2 name:name1 resourceVersion:6501283 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d9043a90-f6bb-4540-a987-2ddaaa37c46b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Apr 8 21:14:01.337: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-08T21:13:21Z generation:2 name:name2 resourceVersion:6501311 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:963d70c1-6794-4244-bd6d-b7e350209339] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] 
CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:14:11.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9789" for this suite. • [SLOW TEST:61.265 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":12,"skipped":220,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:14:11.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:14:11.916: INFO: >>> kubeConfig: /root/.kube/config 
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 8 21:14:14.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1426 create -f -' Apr 8 21:14:17.664: INFO: stderr: "" Apr 8 21:14:17.664: INFO: stdout: "e2e-test-crd-publish-openapi-9684-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 8 21:14:17.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1426 delete e2e-test-crd-publish-openapi-9684-crds test-cr' Apr 8 21:14:17.764: INFO: stderr: "" Apr 8 21:14:17.764: INFO: stdout: "e2e-test-crd-publish-openapi-9684-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 8 21:14:17.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1426 apply -f -' Apr 8 21:14:18.012: INFO: stderr: "" Apr 8 21:14:18.012: INFO: stdout: "e2e-test-crd-publish-openapi-9684-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 8 21:14:18.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1426 delete e2e-test-crd-publish-openapi-9684-crds test-cr' Apr 8 21:14:18.112: INFO: stderr: "" Apr 8 21:14:18.112: INFO: stdout: "e2e-test-crd-publish-openapi-9684-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 8 21:14:18.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9684-crds' Apr 8 21:14:18.348: INFO: stderr: "" Apr 8 21:14:18.348: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9684-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this 
representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:14:20.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1426" for this suite. 
• [SLOW TEST:8.407 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":13,"skipped":221,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:14:20.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Apr 8 21:14:20.320: INFO: Waiting up to 5m0s for pod "client-containers-d89cd816-bbe8-4a30-9c32-7a77b222bc12" in namespace "containers-8021" to be "success or failure" Apr 8 21:14:20.362: INFO: Pod "client-containers-d89cd816-bbe8-4a30-9c32-7a77b222bc12": Phase="Pending", Reason="", readiness=false. Elapsed: 42.51805ms Apr 8 21:14:22.367: INFO: Pod "client-containers-d89cd816-bbe8-4a30-9c32-7a77b222bc12": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047026304s Apr 8 21:14:24.371: INFO: Pod "client-containers-d89cd816-bbe8-4a30-9c32-7a77b222bc12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050830234s STEP: Saw pod success Apr 8 21:14:24.371: INFO: Pod "client-containers-d89cd816-bbe8-4a30-9c32-7a77b222bc12" satisfied condition "success or failure" Apr 8 21:14:24.373: INFO: Trying to get logs from node jerma-worker pod client-containers-d89cd816-bbe8-4a30-9c32-7a77b222bc12 container test-container: STEP: delete the pod Apr 8 21:14:24.403: INFO: Waiting for pod client-containers-d89cd816-bbe8-4a30-9c32-7a77b222bc12 to disappear Apr 8 21:14:24.408: INFO: Pod client-containers-d89cd816-bbe8-4a30-9c32-7a77b222bc12 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:14:24.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8021" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":226,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:14:24.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: Creating a pod to test emptydir 0666 on node default medium Apr 8 21:14:24.506: INFO: Waiting up to 5m0s for pod "pod-be9e8583-b7a2-4372-81d6-d43ef9d26faa" in namespace "emptydir-9867" to be "success or failure" Apr 8 21:14:24.523: INFO: Pod "pod-be9e8583-b7a2-4372-81d6-d43ef9d26faa": Phase="Pending", Reason="", readiness=false. Elapsed: 17.285109ms Apr 8 21:14:26.527: INFO: Pod "pod-be9e8583-b7a2-4372-81d6-d43ef9d26faa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021585515s Apr 8 21:14:28.531: INFO: Pod "pod-be9e8583-b7a2-4372-81d6-d43ef9d26faa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025856176s STEP: Saw pod success Apr 8 21:14:28.531: INFO: Pod "pod-be9e8583-b7a2-4372-81d6-d43ef9d26faa" satisfied condition "success or failure" Apr 8 21:14:28.534: INFO: Trying to get logs from node jerma-worker2 pod pod-be9e8583-b7a2-4372-81d6-d43ef9d26faa container test-container: STEP: delete the pod Apr 8 21:14:28.564: INFO: Waiting for pod pod-be9e8583-b7a2-4372-81d6-d43ef9d26faa to disappear Apr 8 21:14:28.590: INFO: Pod pod-be9e8583-b7a2-4372-81d6-d43ef9d26faa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:14:28.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9867" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:14:28.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 21:14:29.055: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:14:31.085: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977269, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977269, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721977269, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977269, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:14:34.114: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:14:34.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6288-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:14:35.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1732" for this suite. STEP: Destroying namespace "webhook-1732-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.741 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":16,"skipped":278,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:14:35.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 21:14:35.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f34fc47-d7f5-4540-9485-15ee359ae3c8" in namespace "downward-api-5883" to be "success or failure"
Apr 8 21:14:35.404: INFO: Pod "downwardapi-volume-1f34fc47-d7f5-4540-9485-15ee359ae3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.178597ms
Apr 8 21:14:37.408: INFO: Pod "downwardapi-volume-1f34fc47-d7f5-4540-9485-15ee359ae3c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023189815s
Apr 8 21:14:39.413: INFO: Pod "downwardapi-volume-1f34fc47-d7f5-4540-9485-15ee359ae3c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027434428s
STEP: Saw pod success
Apr 8 21:14:39.413: INFO: Pod "downwardapi-volume-1f34fc47-d7f5-4540-9485-15ee359ae3c8" satisfied condition "success or failure"
Apr 8 21:14:39.416: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1f34fc47-d7f5-4540-9485-15ee359ae3c8 container client-container:
STEP: delete the pod
Apr 8 21:14:39.439: INFO: Waiting for pod downwardapi-volume-1f34fc47-d7f5-4540-9485-15ee359ae3c8 to disappear
Apr 8 21:14:39.444: INFO: Pod downwardapi-volume-1f34fc47-d7f5-4540-9485-15ee359ae3c8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:14:39.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5883" for this suite.
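The pod this test creates can be sketched as follows. This manifest is an illustrative reconstruction, not the suite's actual fixture: it mounts a downward API volume that exposes the container's cpu limit as a file, which the container prints so the test can read it back from the logs.

```shell
# Hypothetical reconstruction of the downward API volume pod; the real
# fixture lives in test/e2e/common/downwardapi_volume.go. On a live cluster
# this could be submitted with: printf '%s\n' "$manifest" | kubectl apply -f -
manifest='
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
'
printf '%s\n' "$manifest"
```

With `divisor: 1m`, a 500m cpu limit is rendered into the file as `500`, which is the kind of value the test compares against the declared limit.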
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":287,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
  listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:14:39.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 8 21:14:39.547: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:14:45.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8873" for this suite.
• [SLOW TEST:6.290 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":18,"skipped":289,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:14:45.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 8 21:14:46.582: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 8 21:14:48.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977286, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977286, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977286, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977286, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 21:14:51.645: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Apr 8 21:14:51.668: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:14:51.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6415" for this suite.
STEP: Destroying namespace "webhook-6415-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.076 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":19,"skipped":301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo
  should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:14:51.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Apr 8 21:14:51.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9393'
Apr 8 21:14:52.331: INFO: stderr: ""
Apr 8 21:14:52.331: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 8 21:14:52.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9393'
Apr 8 21:14:52.550: INFO: stderr: ""
Apr 8 21:14:52.550: INFO: stdout: "update-demo-nautilus-4qsbm update-demo-nautilus-x697w "
Apr 8 21:14:52.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4qsbm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9393'
Apr 8 21:14:52.662: INFO: stderr: ""
Apr 8 21:14:52.662: INFO: stdout: ""
Apr 8 21:14:52.662: INFO: update-demo-nautilus-4qsbm is created but not running
Apr 8 21:14:57.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9393'
Apr 8 21:14:57.767: INFO: stderr: ""
Apr 8 21:14:57.767: INFO: stdout: "update-demo-nautilus-4qsbm update-demo-nautilus-x697w "
Apr 8 21:14:57.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4qsbm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9393'
Apr 8 21:14:57.873: INFO: stderr: ""
Apr 8 21:14:57.873: INFO: stdout: "true"
Apr 8 21:14:57.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4qsbm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9393'
Apr 8 21:14:57.964: INFO: stderr: ""
Apr 8 21:14:57.965: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 21:14:57.965: INFO: validating pod update-demo-nautilus-4qsbm
Apr 8 21:14:57.969: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 21:14:57.969: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 21:14:57.969: INFO: update-demo-nautilus-4qsbm is verified up and running
Apr 8 21:14:57.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x697w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9393'
Apr 8 21:14:58.074: INFO: stderr: ""
Apr 8 21:14:58.074: INFO: stdout: "true"
Apr 8 21:14:58.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x697w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9393'
Apr 8 21:14:58.169: INFO: stderr: ""
Apr 8 21:14:58.169: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 21:14:58.169: INFO: validating pod update-demo-nautilus-x697w
Apr 8 21:14:58.173: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 21:14:58.173: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 21:14:58.173: INFO: update-demo-nautilus-x697w is verified up and running
STEP: using delete to clean up resources
Apr 8 21:14:58.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9393'
Apr 8 21:14:58.351: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 8 21:14:58.351: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 8 21:14:58.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9393'
Apr 8 21:14:58.447: INFO: stderr: "No resources found in kubectl-9393 namespace.\n"
Apr 8 21:14:58.447: INFO: stdout: ""
Apr 8 21:14:58.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9393 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 8 21:14:58.725: INFO: stderr: ""
Apr 8 21:14:58.725: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:14:58.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9393" for this suite.
• [SLOW TEST:6.985 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":20,"skipped":328,"failed":0}
SS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:14:58.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3393, will wait for the garbage collector to delete the pods
Apr 8 21:15:05.114: INFO: Deleting Job.batch foo took: 6.16465ms
Apr 8 21:15:05.214: INFO: Terminating Job.batch foo pods took: 100.285456ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:15:39.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3393" for this suite.
• [SLOW TEST:40.732 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":21,"skipped":330,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo
  should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:15:39.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Apr 8 21:15:39.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4711'
Apr 8 21:15:39.821: INFO: stderr: ""
Apr 8 21:15:39.821: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 8 21:15:39.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4711'
Apr 8 21:15:39.945: INFO: stderr: ""
Apr 8 21:15:39.945: INFO: stdout: "update-demo-nautilus-27l7t update-demo-nautilus-7htgg "
Apr 8 21:15:39.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:40.053: INFO: stderr: ""
Apr 8 21:15:40.053: INFO: stdout: ""
Apr 8 21:15:40.053: INFO: update-demo-nautilus-27l7t is created but not running
Apr 8 21:15:45.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4711'
Apr 8 21:15:45.161: INFO: stderr: ""
Apr 8 21:15:45.161: INFO: stdout: "update-demo-nautilus-27l7t update-demo-nautilus-7htgg "
Apr 8 21:15:45.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:45.265: INFO: stderr: ""
Apr 8 21:15:45.265: INFO: stdout: "true"
Apr 8 21:15:45.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:45.363: INFO: stderr: ""
Apr 8 21:15:45.363: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 21:15:45.363: INFO: validating pod update-demo-nautilus-27l7t
Apr 8 21:15:45.367: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 21:15:45.367: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 21:15:45.367: INFO: update-demo-nautilus-27l7t is verified up and running
Apr 8 21:15:45.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7htgg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:45.453: INFO: stderr: ""
Apr 8 21:15:45.454: INFO: stdout: "true"
Apr 8 21:15:45.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7htgg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:45.537: INFO: stderr: ""
Apr 8 21:15:45.537: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 21:15:45.537: INFO: validating pod update-demo-nautilus-7htgg
Apr 8 21:15:45.541: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 21:15:45.541: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 21:15:45.541: INFO: update-demo-nautilus-7htgg is verified up and running
STEP: scaling down the replication controller
Apr 8 21:15:45.544: INFO: scanned /root for discovery docs:
Apr 8 21:15:45.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4711'
Apr 8 21:15:46.672: INFO: stderr: ""
Apr 8 21:15:46.673: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 8 21:15:46.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4711'
Apr 8 21:15:46.781: INFO: stderr: ""
Apr 8 21:15:46.781: INFO: stdout: "update-demo-nautilus-27l7t update-demo-nautilus-7htgg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 8 21:15:51.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4711'
Apr 8 21:15:51.896: INFO: stderr: ""
Apr 8 21:15:51.896: INFO: stdout: "update-demo-nautilus-27l7t "
Apr 8 21:15:51.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:51.983: INFO: stderr: ""
Apr 8 21:15:51.983: INFO: stdout: "true"
Apr 8 21:15:51.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:52.081: INFO: stderr: ""
Apr 8 21:15:52.081: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 21:15:52.081: INFO: validating pod update-demo-nautilus-27l7t
Apr 8 21:15:52.084: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 21:15:52.084: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 21:15:52.084: INFO: update-demo-nautilus-27l7t is verified up and running
STEP: scaling up the replication controller
Apr 8 21:15:52.088: INFO: scanned /root for discovery docs:
Apr 8 21:15:52.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4711'
Apr 8 21:15:53.213: INFO: stderr: ""
Apr 8 21:15:53.213: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 8 21:15:53.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4711'
Apr 8 21:15:53.310: INFO: stderr: ""
Apr 8 21:15:53.310: INFO: stdout: "update-demo-nautilus-27l7t update-demo-nautilus-8wzwc "
Apr 8 21:15:53.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:53.409: INFO: stderr: ""
Apr 8 21:15:53.409: INFO: stdout: "true"
Apr 8 21:15:53.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:53.519: INFO: stderr: ""
Apr 8 21:15:53.519: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 21:15:53.519: INFO: validating pod update-demo-nautilus-27l7t
Apr 8 21:15:53.523: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 21:15:53.523: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 21:15:53.523: INFO: update-demo-nautilus-27l7t is verified up and running
Apr 8 21:15:53.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wzwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:53.610: INFO: stderr: ""
Apr 8 21:15:53.610: INFO: stdout: ""
Apr 8 21:15:53.610: INFO: update-demo-nautilus-8wzwc is created but not running
Apr 8 21:15:58.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4711'
Apr 8 21:15:58.723: INFO: stderr: ""
Apr 8 21:15:58.723: INFO: stdout: "update-demo-nautilus-27l7t update-demo-nautilus-8wzwc "
Apr 8 21:15:58.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:58.824: INFO: stderr: ""
Apr 8 21:15:58.824: INFO: stdout: "true"
Apr 8 21:15:58.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27l7t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:58.931: INFO: stderr: ""
Apr 8 21:15:58.931: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 21:15:58.931: INFO: validating pod update-demo-nautilus-27l7t
Apr 8 21:15:58.935: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 21:15:58.935: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 21:15:58.935: INFO: update-demo-nautilus-27l7t is verified up and running
Apr 8 21:15:58.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wzwc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:59.046: INFO: stderr: ""
Apr 8 21:15:59.046: INFO: stdout: "true"
Apr 8 21:15:59.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8wzwc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4711'
Apr 8 21:15:59.135: INFO: stderr: ""
Apr 8 21:15:59.135: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 8 21:15:59.135: INFO: validating pod update-demo-nautilus-8wzwc
Apr 8 21:15:59.141: INFO: got data: { "image": "nautilus.jpg" }
Apr 8 21:15:59.141: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 8 21:15:59.141: INFO: update-demo-nautilus-8wzwc is verified up and running
STEP: using delete to clean up resources
Apr 8 21:15:59.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4711'
Apr 8 21:15:59.253: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 8 21:15:59.253: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 8 21:15:59.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4711'
Apr 8 21:15:59.366: INFO: stderr: "No resources found in kubectl-4711 namespace.\n"
Apr 8 21:15:59.366: INFO: stdout: ""
Apr 8 21:15:59.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4711 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 8 21:15:59.459: INFO: stderr: ""
Apr 8 21:15:59.459: INFO: stdout: "update-demo-nautilus-27l7t\nupdate-demo-nautilus-8wzwc\n"
Apr 8 21:15:59.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4711'
Apr 8 21:16:00.051: INFO: stderr: "No resources found in kubectl-4711 namespace.\n"
Apr 8 21:16:00.051: INFO: stdout: ""
Apr 8 21:16:00.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4711 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 8 21:16:00.142: INFO: stderr: ""
Apr 8 21:16:00.142: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:16:00.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4711" for this suite.
• [SLOW TEST:20.613 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":22,"skipped":339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:16:00.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 8 21:16:00.371: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-a 94644a6a-34a5-4ed6-9580-bdcbc0779ece 6502117 0 2020-04-08 21:16:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 8 21:16:00.371: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-a 94644a6a-34a5-4ed6-9580-bdcbc0779ece 6502117 0 2020-04-08 21:16:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 8 21:16:10.379: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-a 94644a6a-34a5-4ed6-9580-bdcbc0779ece 6502176 0 2020-04-08 21:16:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 8 21:16:10.379: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-a 94644a6a-34a5-4ed6-9580-bdcbc0779ece 6502176 0 2020-04-08 21:16:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 8 21:16:20.392: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-a 94644a6a-34a5-4ed6-9580-bdcbc0779ece 6502208 0 2020-04-08 21:16:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 8 21:16:20.392: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-a 94644a6a-34a5-4ed6-9580-bdcbc0779ece 6502208 0 2020-04-08 21:16:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 8 21:16:30.422: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-a 94644a6a-34a5-4ed6-9580-bdcbc0779ece 6502238 0 2020-04-08 21:16:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 8 21:16:30.422: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-a 94644a6a-34a5-4ed6-9580-bdcbc0779ece 6502238 0 2020-04-08 21:16:00 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 8 21:16:40.448: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-b 5425ff7a-2c3d-4e0e-a866-fb92e4a33e44 6502266 0 2020-04-08 21:16:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 8 21:16:40.448: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-b 5425ff7a-2c3d-4e0e-a866-fb92e4a33e44 6502266 0 2020-04-08 21:16:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 8 21:16:50.456: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-b 5425ff7a-2c3d-4e0e-a866-fb92e4a33e44 6502296 0 2020-04-08 21:16:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 8 21:16:50.456: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-9062 /api/v1/namespaces/watch-9062/configmaps/e2e-watch-test-configmap-b 5425ff7a-2c3d-4e0e-a866-fb92e4a33e44 6502296 0 2020-04-08 21:16:40 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:00.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9062" for this suite.
• [SLOW TEST:60.315 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":23,"skipped":364,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:17:00.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 8 21:17:05.087: INFO: Successfully updated pod "labelsupdate62525b34-1c85-4fc2-9686-2c1713033ec8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:17:07.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3709" for this suite. 
• [SLOW TEST:6.649 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":367,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:17:07.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-c5d40f89-d1f9-42d5-9237-aad9ee8856a9 STEP: Creating a pod to test consume secrets Apr 8 21:17:07.225: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-99c954d5-0f50-4e14-9038-02285e89d059" in namespace "projected-7613" to be "success or failure" Apr 8 21:17:07.238: INFO: Pod "pod-projected-secrets-99c954d5-0f50-4e14-9038-02285e89d059": Phase="Pending", Reason="", readiness=false. Elapsed: 13.101679ms Apr 8 21:17:09.243: INFO: Pod "pod-projected-secrets-99c954d5-0f50-4e14-9038-02285e89d059": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.017389529s Apr 8 21:17:11.247: INFO: Pod "pod-projected-secrets-99c954d5-0f50-4e14-9038-02285e89d059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022089464s STEP: Saw pod success Apr 8 21:17:11.247: INFO: Pod "pod-projected-secrets-99c954d5-0f50-4e14-9038-02285e89d059" satisfied condition "success or failure" Apr 8 21:17:11.251: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-99c954d5-0f50-4e14-9038-02285e89d059 container projected-secret-volume-test: STEP: delete the pod Apr 8 21:17:11.289: INFO: Waiting for pod pod-projected-secrets-99c954d5-0f50-4e14-9038-02285e89d059 to disappear Apr 8 21:17:11.298: INFO: Pod pod-projected-secrets-99c954d5-0f50-4e14-9038-02285e89d059 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:17:11.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7613" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":367,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:17:11.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 8 21:17:11.353: INFO: Created pod &Pod{ObjectMeta:{dns-5238 dns-5238 /api/v1/namespaces/dns-5238/pods/dns-5238 9633bf12-e360-4ceb-8578-2b630f81e1d0 6502397 0 2020-04-08 21:17:11 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ttncr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ttncr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ttncr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Apr 8 21:17:15.377: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5238 PodName:dns-5238 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:17:15.377: INFO: >>> kubeConfig: /root/.kube/config I0408 21:17:15.416873 6 log.go:172] (0xc0012ce4d0) (0xc002286dc0) Create stream I0408 21:17:15.416902 6 log.go:172] (0xc0012ce4d0) (0xc002286dc0) Stream added, broadcasting: 1 I0408 21:17:15.419530 6 log.go:172] (0xc0012ce4d0) Reply frame received for 1 I0408 21:17:15.419594 6 log.go:172] (0xc0012ce4d0) (0xc002396140) Create stream I0408 21:17:15.419612 6 log.go:172] (0xc0012ce4d0) (0xc002396140) Stream added, broadcasting: 3 I0408 21:17:15.420773 6 log.go:172] (0xc0012ce4d0) Reply frame received for 3 I0408 21:17:15.420803 6 log.go:172] (0xc0012ce4d0) (0xc002286e60) Create stream I0408 21:17:15.420814 6 log.go:172] (0xc0012ce4d0) (0xc002286e60) Stream added, broadcasting: 5 I0408 21:17:15.421849 6 log.go:172] (0xc0012ce4d0) Reply frame received for 5 I0408 21:17:15.490718 6 log.go:172] (0xc0012ce4d0) Data frame received for 3 I0408 21:17:15.490781 6 log.go:172] (0xc002396140) (3) Data frame handling I0408 21:17:15.490808 6 log.go:172] (0xc002396140) (3) Data frame sent I0408 21:17:15.491527 6 log.go:172] (0xc0012ce4d0) Data frame received for 3 I0408 21:17:15.491549 6 log.go:172] (0xc002396140) (3) Data frame handling I0408 21:17:15.491622 6 log.go:172] (0xc0012ce4d0) Data frame received for 5 I0408 21:17:15.491665 6 log.go:172] (0xc002286e60) (5) Data frame handling I0408 21:17:15.493587 6 log.go:172] (0xc0012ce4d0) Data frame received for 1 I0408 21:17:15.493639 6 log.go:172] (0xc002286dc0) (1) Data frame handling I0408 21:17:15.493686 6 log.go:172] (0xc002286dc0) (1) Data frame sent I0408 21:17:15.493717 6 log.go:172] (0xc0012ce4d0) (0xc002286dc0) Stream removed, broadcasting: 1 I0408 21:17:15.493777 6 log.go:172] (0xc0012ce4d0) Go away received I0408 21:17:15.494136 6 log.go:172] (0xc0012ce4d0) 
(0xc002286dc0) Stream removed, broadcasting: 1 I0408 21:17:15.494160 6 log.go:172] (0xc0012ce4d0) (0xc002396140) Stream removed, broadcasting: 3 I0408 21:17:15.494172 6 log.go:172] (0xc0012ce4d0) (0xc002286e60) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 8 21:17:15.494: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5238 PodName:dns-5238 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:17:15.494: INFO: >>> kubeConfig: /root/.kube/config I0408 21:17:15.522263 6 log.go:172] (0xc001a6ef20) (0xc001b0dea0) Create stream I0408 21:17:15.522292 6 log.go:172] (0xc001a6ef20) (0xc001b0dea0) Stream added, broadcasting: 1 I0408 21:17:15.524635 6 log.go:172] (0xc001a6ef20) Reply frame received for 1 I0408 21:17:15.524689 6 log.go:172] (0xc001a6ef20) (0xc002908000) Create stream I0408 21:17:15.524709 6 log.go:172] (0xc001a6ef20) (0xc002908000) Stream added, broadcasting: 3 I0408 21:17:15.526066 6 log.go:172] (0xc001a6ef20) Reply frame received for 3 I0408 21:17:15.526105 6 log.go:172] (0xc001a6ef20) (0xc0029080a0) Create stream I0408 21:17:15.526116 6 log.go:172] (0xc001a6ef20) (0xc0029080a0) Stream added, broadcasting: 5 I0408 21:17:15.527081 6 log.go:172] (0xc001a6ef20) Reply frame received for 5 I0408 21:17:15.616571 6 log.go:172] (0xc001a6ef20) Data frame received for 3 I0408 21:17:15.616597 6 log.go:172] (0xc002908000) (3) Data frame handling I0408 21:17:15.616620 6 log.go:172] (0xc002908000) (3) Data frame sent I0408 21:17:15.617664 6 log.go:172] (0xc001a6ef20) Data frame received for 5 I0408 21:17:15.617694 6 log.go:172] (0xc0029080a0) (5) Data frame handling I0408 21:17:15.617719 6 log.go:172] (0xc001a6ef20) Data frame received for 3 I0408 21:17:15.617733 6 log.go:172] (0xc002908000) (3) Data frame handling I0408 21:17:15.619255 6 log.go:172] (0xc001a6ef20) Data frame received for 1 I0408 21:17:15.619286 6 log.go:172] (0xc001b0dea0) (1) Data 
frame handling I0408 21:17:15.619318 6 log.go:172] (0xc001b0dea0) (1) Data frame sent I0408 21:17:15.619348 6 log.go:172] (0xc001a6ef20) (0xc001b0dea0) Stream removed, broadcasting: 1 I0408 21:17:15.619379 6 log.go:172] (0xc001a6ef20) Go away received I0408 21:17:15.619526 6 log.go:172] (0xc001a6ef20) (0xc001b0dea0) Stream removed, broadcasting: 1 I0408 21:17:15.619564 6 log.go:172] (0xc001a6ef20) (0xc002908000) Stream removed, broadcasting: 3 I0408 21:17:15.619594 6 log.go:172] (0xc001a6ef20) (0xc0029080a0) Stream removed, broadcasting: 5 Apr 8 21:17:15.619: INFO: Deleting pod dns-5238... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:17:15.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5238" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":26,"skipped":381,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:17:15.690: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 8 21:17:15.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1460' Apr 8 21:17:15.918: INFO: stderr: "" Apr 8 21:17:15.918: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Apr 8 21:17:15.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1460' Apr 8 21:17:18.385: INFO: stderr: "" Apr 8 21:17:18.385: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:17:18.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1460" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":27,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:17:18.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 8 21:17:28.523: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:17:28.523: INFO: >>> kubeConfig: /root/.kube/config I0408 21:17:28.557876 6 log.go:172] (0xc0012cec60) (0xc002287d60) Create stream I0408 21:17:28.557903 6 log.go:172] (0xc0012cec60) (0xc002287d60) Stream added, broadcasting: 1 I0408 21:17:28.559933 6 log.go:172] (0xc0012cec60) Reply frame received for 1 I0408 21:17:28.559968 6 log.go:172] (0xc0012cec60) (0xc0023c74a0) Create stream I0408 21:17:28.559986 6 log.go:172] (0xc0012cec60) (0xc0023c74a0) Stream added, broadcasting: 3 I0408 21:17:28.561262 6 
log.go:172] (0xc0012cec60) Reply frame received for 3 I0408 21:17:28.561314 6 log.go:172] (0xc0012cec60) (0xc002908aa0) Create stream I0408 21:17:28.561336 6 log.go:172] (0xc0012cec60) (0xc002908aa0) Stream added, broadcasting: 5 I0408 21:17:28.562460 6 log.go:172] (0xc0012cec60) Reply frame received for 5 I0408 21:17:28.640996 6 log.go:172] (0xc0012cec60) Data frame received for 3 I0408 21:17:28.641034 6 log.go:172] (0xc0023c74a0) (3) Data frame handling I0408 21:17:28.641046 6 log.go:172] (0xc0023c74a0) (3) Data frame sent I0408 21:17:28.641061 6 log.go:172] (0xc0012cec60) Data frame received for 3 I0408 21:17:28.641089 6 log.go:172] (0xc0023c74a0) (3) Data frame handling I0408 21:17:28.641195 6 log.go:172] (0xc0012cec60) Data frame received for 5 I0408 21:17:28.641226 6 log.go:172] (0xc002908aa0) (5) Data frame handling I0408 21:17:28.642931 6 log.go:172] (0xc0012cec60) Data frame received for 1 I0408 21:17:28.642953 6 log.go:172] (0xc002287d60) (1) Data frame handling I0408 21:17:28.642971 6 log.go:172] (0xc002287d60) (1) Data frame sent I0408 21:17:28.642983 6 log.go:172] (0xc0012cec60) (0xc002287d60) Stream removed, broadcasting: 1 I0408 21:17:28.643004 6 log.go:172] (0xc0012cec60) Go away received I0408 21:17:28.643166 6 log.go:172] (0xc0012cec60) (0xc002287d60) Stream removed, broadcasting: 1 I0408 21:17:28.643205 6 log.go:172] (0xc0012cec60) (0xc0023c74a0) Stream removed, broadcasting: 3 I0408 21:17:28.643234 6 log.go:172] (0xc0012cec60) (0xc002908aa0) Stream removed, broadcasting: 5 Apr 8 21:17:28.643: INFO: Exec stderr: "" Apr 8 21:17:28.643: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:17:28.643: INFO: >>> kubeConfig: /root/.kube/config I0408 21:17:28.674296 6 log.go:172] (0xc0012cf290) (0xc00221a140) Create stream I0408 21:17:28.674323 6 log.go:172] (0xc0012cf290) (0xc00221a140) Stream 
added, broadcasting: 1 I0408 21:17:28.676144 6 log.go:172] (0xc0012cf290) Reply frame received for 1 I0408 21:17:28.676169 6 log.go:172] (0xc0012cf290) (0xc002908b40) Create stream I0408 21:17:28.676179 6 log.go:172] (0xc0012cf290) (0xc002908b40) Stream added, broadcasting: 3 I0408 21:17:28.677386 6 log.go:172] (0xc0012cf290) Reply frame received for 3 I0408 21:17:28.677425 6 log.go:172] (0xc0012cf290) (0xc002397220) Create stream I0408 21:17:28.677444 6 log.go:172] (0xc0012cf290) (0xc002397220) Stream added, broadcasting: 5 I0408 21:17:28.678526 6 log.go:172] (0xc0012cf290) Reply frame received for 5 I0408 21:17:28.742061 6 log.go:172] (0xc0012cf290) Data frame received for 3 I0408 21:17:28.742098 6 log.go:172] (0xc002908b40) (3) Data frame handling I0408 21:17:28.742116 6 log.go:172] (0xc002908b40) (3) Data frame sent I0408 21:17:28.742130 6 log.go:172] (0xc0012cf290) Data frame received for 3 I0408 21:17:28.742143 6 log.go:172] (0xc002908b40) (3) Data frame handling I0408 21:17:28.742185 6 log.go:172] (0xc0012cf290) Data frame received for 5 I0408 21:17:28.742213 6 log.go:172] (0xc002397220) (5) Data frame handling I0408 21:17:28.743592 6 log.go:172] (0xc0012cf290) Data frame received for 1 I0408 21:17:28.743625 6 log.go:172] (0xc00221a140) (1) Data frame handling I0408 21:17:28.743652 6 log.go:172] (0xc00221a140) (1) Data frame sent I0408 21:17:28.743681 6 log.go:172] (0xc0012cf290) (0xc00221a140) Stream removed, broadcasting: 1 I0408 21:17:28.743799 6 log.go:172] (0xc0012cf290) (0xc00221a140) Stream removed, broadcasting: 1 I0408 21:17:28.743819 6 log.go:172] (0xc0012cf290) (0xc002908b40) Stream removed, broadcasting: 3 I0408 21:17:28.743831 6 log.go:172] (0xc0012cf290) (0xc002397220) Stream removed, broadcasting: 5 Apr 8 21:17:28.743: INFO: Exec stderr: "" Apr 8 21:17:28.743: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Apr 8 21:17:28.743: INFO: >>> kubeConfig: /root/.kube/config I0408 21:17:28.745365 6 log.go:172] (0xc0012cf290) Go away received I0408 21:17:28.778212 6 log.go:172] (0xc0012cf8c0) (0xc00221a320) Create stream I0408 21:17:28.778234 6 log.go:172] (0xc0012cf8c0) (0xc00221a320) Stream added, broadcasting: 1 I0408 21:17:28.780834 6 log.go:172] (0xc0012cf8c0) Reply frame received for 1 I0408 21:17:28.780885 6 log.go:172] (0xc0012cf8c0) (0xc0023c7540) Create stream I0408 21:17:28.780906 6 log.go:172] (0xc0012cf8c0) (0xc0023c7540) Stream added, broadcasting: 3 I0408 21:17:28.782168 6 log.go:172] (0xc0012cf8c0) Reply frame received for 3 I0408 21:17:28.782225 6 log.go:172] (0xc0012cf8c0) (0xc0023c75e0) Create stream I0408 21:17:28.782241 6 log.go:172] (0xc0012cf8c0) (0xc0023c75e0) Stream added, broadcasting: 5 I0408 21:17:28.783208 6 log.go:172] (0xc0012cf8c0) Reply frame received for 5 I0408 21:17:28.841503 6 log.go:172] (0xc0012cf8c0) Data frame received for 5 I0408 21:17:28.841551 6 log.go:172] (0xc0023c75e0) (5) Data frame handling I0408 21:17:28.841591 6 log.go:172] (0xc0012cf8c0) Data frame received for 3 I0408 21:17:28.841612 6 log.go:172] (0xc0023c7540) (3) Data frame handling I0408 21:17:28.841631 6 log.go:172] (0xc0023c7540) (3) Data frame sent I0408 21:17:28.841643 6 log.go:172] (0xc0012cf8c0) Data frame received for 3 I0408 21:17:28.841652 6 log.go:172] (0xc0023c7540) (3) Data frame handling I0408 21:17:28.842905 6 log.go:172] (0xc0012cf8c0) Data frame received for 1 I0408 21:17:28.842931 6 log.go:172] (0xc00221a320) (1) Data frame handling I0408 21:17:28.842954 6 log.go:172] (0xc00221a320) (1) Data frame sent I0408 21:17:28.842969 6 log.go:172] (0xc0012cf8c0) (0xc00221a320) Stream removed, broadcasting: 1 I0408 21:17:28.843064 6 log.go:172] (0xc0012cf8c0) (0xc00221a320) Stream removed, broadcasting: 1 I0408 21:17:28.843088 6 log.go:172] (0xc0012cf8c0) (0xc0023c7540) Stream removed, broadcasting: 3 I0408 21:17:28.843186 6 log.go:172] 
(0xc0012cf8c0) Go away received I0408 21:17:28.843243 6 log.go:172] (0xc0012cf8c0) (0xc0023c75e0) Stream removed, broadcasting: 5 Apr 8 21:17:28.843: INFO: Exec stderr: "" Apr 8 21:17:28.843: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:17:28.843: INFO: >>> kubeConfig: /root/.kube/config I0408 21:17:28.879186 6 log.go:172] (0xc001a6f760) (0xc0023c7860) Create stream I0408 21:17:28.879223 6 log.go:172] (0xc001a6f760) (0xc0023c7860) Stream added, broadcasting: 1 I0408 21:17:28.882373 6 log.go:172] (0xc001a6f760) Reply frame received for 1 I0408 21:17:28.882416 6 log.go:172] (0xc001a6f760) (0xc0023972c0) Create stream I0408 21:17:28.882432 6 log.go:172] (0xc001a6f760) (0xc0023972c0) Stream added, broadcasting: 3 I0408 21:17:28.883271 6 log.go:172] (0xc001a6f760) Reply frame received for 3 I0408 21:17:28.883306 6 log.go:172] (0xc001a6f760) (0xc002908c80) Create stream I0408 21:17:28.883318 6 log.go:172] (0xc001a6f760) (0xc002908c80) Stream added, broadcasting: 5 I0408 21:17:28.884287 6 log.go:172] (0xc001a6f760) Reply frame received for 5 I0408 21:17:28.955862 6 log.go:172] (0xc001a6f760) Data frame received for 5 I0408 21:17:28.955913 6 log.go:172] (0xc002908c80) (5) Data frame handling I0408 21:17:28.955949 6 log.go:172] (0xc001a6f760) Data frame received for 3 I0408 21:17:28.955968 6 log.go:172] (0xc0023972c0) (3) Data frame handling I0408 21:17:28.955986 6 log.go:172] (0xc0023972c0) (3) Data frame sent I0408 21:17:28.955995 6 log.go:172] (0xc001a6f760) Data frame received for 3 I0408 21:17:28.956009 6 log.go:172] (0xc0023972c0) (3) Data frame handling I0408 21:17:28.957525 6 log.go:172] (0xc001a6f760) Data frame received for 1 I0408 21:17:28.957566 6 log.go:172] (0xc0023c7860) (1) Data frame handling I0408 21:17:28.957592 6 log.go:172] (0xc0023c7860) (1) Data frame sent I0408 21:17:28.957611 6 
log.go:172] (0xc001a6f760) (0xc0023c7860) Stream removed, broadcasting: 1 I0408 21:17:28.957628 6 log.go:172] (0xc001a6f760) Go away received I0408 21:17:28.957784 6 log.go:172] (0xc001a6f760) (0xc0023c7860) Stream removed, broadcasting: 1 I0408 21:17:28.957803 6 log.go:172] (0xc001a6f760) (0xc0023972c0) Stream removed, broadcasting: 3 I0408 21:17:28.957812 6 log.go:172] (0xc001a6f760) (0xc002908c80) Stream removed, broadcasting: 5 Apr 8 21:17:28.957: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 8 21:17:28.957: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:17:28.957: INFO: >>> kubeConfig: /root/.kube/config I0408 21:17:28.988586 6 log.go:172] (0xc0016c0580) (0xc002908f00) Create stream I0408 21:17:28.988621 6 log.go:172] (0xc0016c0580) (0xc002908f00) Stream added, broadcasting: 1 I0408 21:17:28.991747 6 log.go:172] (0xc0016c0580) Reply frame received for 1 I0408 21:17:28.991783 6 log.go:172] (0xc0016c0580) (0xc00221a460) Create stream I0408 21:17:28.991796 6 log.go:172] (0xc0016c0580) (0xc00221a460) Stream added, broadcasting: 3 I0408 21:17:28.992572 6 log.go:172] (0xc0016c0580) Reply frame received for 3 I0408 21:17:28.992604 6 log.go:172] (0xc0016c0580) (0xc0023c7900) Create stream I0408 21:17:28.992615 6 log.go:172] (0xc0016c0580) (0xc0023c7900) Stream added, broadcasting: 5 I0408 21:17:28.993603 6 log.go:172] (0xc0016c0580) Reply frame received for 5 I0408 21:17:29.058301 6 log.go:172] (0xc0016c0580) Data frame received for 3 I0408 21:17:29.058351 6 log.go:172] (0xc00221a460) (3) Data frame handling I0408 21:17:29.058368 6 log.go:172] (0xc00221a460) (3) Data frame sent I0408 21:17:29.058400 6 log.go:172] (0xc0016c0580) Data frame received for 3 I0408 21:17:29.058417 6 log.go:172] (0xc00221a460) (3) Data frame handling 
I0408 21:17:29.058438 6 log.go:172] (0xc0016c0580) Data frame received for 5
I0408 21:17:29.058453 6 log.go:172] (0xc0023c7900) (5) Data frame handling
I0408 21:17:29.059752 6 log.go:172] (0xc0016c0580) Data frame received for 1
I0408 21:17:29.059775 6 log.go:172] (0xc002908f00) (1) Data frame handling
I0408 21:17:29.059804 6 log.go:172] (0xc002908f00) (1) Data frame sent
I0408 21:17:29.059820 6 log.go:172] (0xc0016c0580) (0xc002908f00) Stream removed, broadcasting: 1
I0408 21:17:29.059835 6 log.go:172] (0xc0016c0580) Go away received
I0408 21:17:29.059981 6 log.go:172] (0xc0016c0580) (0xc002908f00) Stream removed, broadcasting: 1
I0408 21:17:29.059997 6 log.go:172] (0xc0016c0580) (0xc00221a460) Stream removed, broadcasting: 3
I0408 21:17:29.060006 6 log.go:172] (0xc0016c0580) (0xc0023c7900) Stream removed, broadcasting: 5
Apr 8 21:17:29.060: INFO: Exec stderr: ""
Apr 8 21:17:29.060: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 8 21:17:29.060: INFO: >>> kubeConfig: /root/.kube/config
I0408 21:17:29.094795 6 log.go:172] (0xc0057202c0) (0xc0023975e0) Create stream
I0408 21:17:29.094829 6 log.go:172] (0xc0057202c0) (0xc0023975e0) Stream added, broadcasting: 1
I0408 21:17:29.097920 6 log.go:172] (0xc0057202c0) Reply frame received for 1
I0408 21:17:29.097968 6 log.go:172] (0xc0057202c0) (0xc0023c79a0) Create stream
I0408 21:17:29.097990 6 log.go:172] (0xc0057202c0) (0xc0023c79a0) Stream added, broadcasting: 3
I0408 21:17:29.098958 6 log.go:172] (0xc0057202c0) Reply frame received for 3
I0408 21:17:29.098988 6 log.go:172] (0xc0057202c0) (0xc002397720) Create stream
I0408 21:17:29.099006 6 log.go:172] (0xc0057202c0) (0xc002397720) Stream added, broadcasting: 5
I0408 21:17:29.099868 6 log.go:172] (0xc0057202c0) Reply frame received for 5
I0408 21:17:29.156885 6 log.go:172] (0xc0057202c0) Data frame received for 3
I0408 21:17:29.156932 6 log.go:172] (0xc0023c79a0) (3) Data frame handling
I0408 21:17:29.156961 6 log.go:172] (0xc0023c79a0) (3) Data frame sent
I0408 21:17:29.156983 6 log.go:172] (0xc0057202c0) Data frame received for 3
I0408 21:17:29.157001 6 log.go:172] (0xc0023c79a0) (3) Data frame handling
I0408 21:17:29.157743 6 log.go:172] (0xc0057202c0) Data frame received for 5
I0408 21:17:29.157766 6 log.go:172] (0xc002397720) (5) Data frame handling
I0408 21:17:29.166419 6 log.go:172] (0xc0057202c0) Data frame received for 1
I0408 21:17:29.166442 6 log.go:172] (0xc0023975e0) (1) Data frame handling
I0408 21:17:29.166461 6 log.go:172] (0xc0023975e0) (1) Data frame sent
I0408 21:17:29.166473 6 log.go:172] (0xc0057202c0) (0xc0023975e0) Stream removed, broadcasting: 1
I0408 21:17:29.166525 6 log.go:172] (0xc0057202c0) (0xc0023975e0) Stream removed, broadcasting: 1
I0408 21:17:29.166531 6 log.go:172] (0xc0057202c0) (0xc0023c79a0) Stream removed, broadcasting: 3
I0408 21:17:29.166716 6 log.go:172] (0xc0057202c0) Go away received
I0408 21:17:29.166747 6 log.go:172] (0xc0057202c0) (0xc002397720) Stream removed, broadcasting: 5
Apr 8 21:17:29.166: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Apr 8 21:17:29.166: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 8 21:17:29.166: INFO: >>> kubeConfig: /root/.kube/config
I0408 21:17:29.189898 6 log.go:172] (0xc0016c0bb0) (0xc002909040) Create stream
I0408 21:17:29.189917 6 log.go:172] (0xc0016c0bb0) (0xc002909040) Stream added, broadcasting: 1
I0408 21:17:29.192277 6 log.go:172] (0xc0016c0bb0) Reply frame received for 1
I0408 21:17:29.192318 6 log.go:172] (0xc0016c0bb0) (0xc0023c7ae0) Create stream
I0408 21:17:29.192336 6 log.go:172] (0xc0016c0bb0) (0xc0023c7ae0) Stream added, broadcasting: 3
I0408 21:17:29.193327 6 log.go:172] (0xc0016c0bb0) Reply frame received for 3
I0408 21:17:29.193369 6 log.go:172] (0xc0016c0bb0) (0xc00221a500) Create stream
I0408 21:17:29.193381 6 log.go:172] (0xc0016c0bb0) (0xc00221a500) Stream added, broadcasting: 5
I0408 21:17:29.194281 6 log.go:172] (0xc0016c0bb0) Reply frame received for 5
I0408 21:17:29.268962 6 log.go:172] (0xc0016c0bb0) Data frame received for 5
I0408 21:17:29.269018 6 log.go:172] (0xc00221a500) (5) Data frame handling
I0408 21:17:29.269060 6 log.go:172] (0xc0016c0bb0) Data frame received for 3
I0408 21:17:29.269098 6 log.go:172] (0xc0023c7ae0) (3) Data frame handling
I0408 21:17:29.269286 6 log.go:172] (0xc0023c7ae0) (3) Data frame sent
I0408 21:17:29.269307 6 log.go:172] (0xc0016c0bb0) Data frame received for 3
I0408 21:17:29.269317 6 log.go:172] (0xc0023c7ae0) (3) Data frame handling
I0408 21:17:29.270811 6 log.go:172] (0xc0016c0bb0) Data frame received for 1
I0408 21:17:29.270838 6 log.go:172] (0xc002909040) (1) Data frame handling
I0408 21:17:29.270876 6 log.go:172] (0xc002909040) (1) Data frame sent
I0408 21:17:29.270893 6 log.go:172] (0xc0016c0bb0) (0xc002909040) Stream removed, broadcasting: 1
I0408 21:17:29.270911 6 log.go:172] (0xc0016c0bb0) Go away received
I0408 21:17:29.271041 6 log.go:172] (0xc0016c0bb0) (0xc002909040) Stream removed, broadcasting: 1
I0408 21:17:29.271058 6 log.go:172] (0xc0016c0bb0) (0xc0023c7ae0) Stream removed, broadcasting: 3
I0408 21:17:29.271066 6 log.go:172] (0xc0016c0bb0) (0xc00221a500) Stream removed, broadcasting: 5
Apr 8 21:17:29.271: INFO: Exec stderr: ""
Apr 8 21:17:29.271: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 8 21:17:29.271: INFO: >>> kubeConfig: /root/.kube/config
I0408 21:17:29.304314 6 log.go:172] (0xc002779760) (0xc001ebc1e0) Create stream
I0408 21:17:29.304339 6 log.go:172] (0xc002779760) (0xc001ebc1e0) Stream added, broadcasting: 1
I0408 21:17:29.306798 6 log.go:172] (0xc002779760) Reply frame received for 1
I0408 21:17:29.306841 6 log.go:172] (0xc002779760) (0xc002397860) Create stream
I0408 21:17:29.306859 6 log.go:172] (0xc002779760) (0xc002397860) Stream added, broadcasting: 3
I0408 21:17:29.307596 6 log.go:172] (0xc002779760) Reply frame received for 3
I0408 21:17:29.307641 6 log.go:172] (0xc002779760) (0xc002397900) Create stream
I0408 21:17:29.307651 6 log.go:172] (0xc002779760) (0xc002397900) Stream added, broadcasting: 5
I0408 21:17:29.308571 6 log.go:172] (0xc002779760) Reply frame received for 5
I0408 21:17:29.381751 6 log.go:172] (0xc002779760) Data frame received for 5
I0408 21:17:29.381819 6 log.go:172] (0xc002397900) (5) Data frame handling
I0408 21:17:29.381878 6 log.go:172] (0xc002779760) Data frame received for 3
I0408 21:17:29.381894 6 log.go:172] (0xc002397860) (3) Data frame handling
I0408 21:17:29.381916 6 log.go:172] (0xc002397860) (3) Data frame sent
I0408 21:17:29.381927 6 log.go:172] (0xc002779760) Data frame received for 3
I0408 21:17:29.381961 6 log.go:172] (0xc002397860) (3) Data frame handling
I0408 21:17:29.383358 6 log.go:172] (0xc002779760) Data frame received for 1
I0408 21:17:29.383388 6 log.go:172] (0xc001ebc1e0) (1) Data frame handling
I0408 21:17:29.383405 6 log.go:172] (0xc001ebc1e0) (1) Data frame sent
I0408 21:17:29.383419 6 log.go:172] (0xc002779760) (0xc001ebc1e0) Stream removed, broadcasting: 1
I0408 21:17:29.383516 6 log.go:172] (0xc002779760) Go away received
I0408 21:17:29.383547 6 log.go:172] (0xc002779760) (0xc001ebc1e0) Stream removed, broadcasting: 1
I0408 21:17:29.383572 6 log.go:172] (0xc002779760) (0xc002397860) Stream removed, broadcasting: 3
I0408 21:17:29.383703 6 log.go:172] (0xc002779760) (0xc002397900) Stream removed, broadcasting: 5
Apr 8 21:17:29.383: INFO: Exec stderr: ""
Apr 8 21:17:29.383: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 8 21:17:29.383: INFO: >>> kubeConfig: /root/.kube/config
I0408 21:17:29.422591 6 log.go:172] (0xc0016c11e0) (0xc002909220) Create stream
I0408 21:17:29.422615 6 log.go:172] (0xc0016c11e0) (0xc002909220) Stream added, broadcasting: 1
I0408 21:17:29.425845 6 log.go:172] (0xc0016c11e0) Reply frame received for 1
I0408 21:17:29.425880 6 log.go:172] (0xc0016c11e0) (0xc001ebc280) Create stream
I0408 21:17:29.425891 6 log.go:172] (0xc0016c11e0) (0xc001ebc280) Stream added, broadcasting: 3
I0408 21:17:29.426670 6 log.go:172] (0xc0016c11e0) Reply frame received for 3
I0408 21:17:29.426701 6 log.go:172] (0xc0016c11e0) (0xc00221a5a0) Create stream
I0408 21:17:29.426712 6 log.go:172] (0xc0016c11e0) (0xc00221a5a0) Stream added, broadcasting: 5
I0408 21:17:29.427448 6 log.go:172] (0xc0016c11e0) Reply frame received for 5
I0408 21:17:29.485414 6 log.go:172] (0xc0016c11e0) Data frame received for 5
I0408 21:17:29.485449 6 log.go:172] (0xc0016c11e0) Data frame received for 3
I0408 21:17:29.485481 6 log.go:172] (0xc001ebc280) (3) Data frame handling
I0408 21:17:29.485497 6 log.go:172] (0xc001ebc280) (3) Data frame sent
I0408 21:17:29.485517 6 log.go:172] (0xc0016c11e0) Data frame received for 3
I0408 21:17:29.485544 6 log.go:172] (0xc001ebc280) (3) Data frame handling
I0408 21:17:29.485598 6 log.go:172] (0xc00221a5a0) (5) Data frame handling
I0408 21:17:29.487229 6 log.go:172] (0xc0016c11e0) Data frame received for 1
I0408 21:17:29.487251 6 log.go:172] (0xc002909220) (1) Data frame handling
I0408 21:17:29.487268 6 log.go:172] (0xc002909220) (1) Data frame sent
I0408 21:17:29.487288 6 log.go:172] (0xc0016c11e0) (0xc002909220) Stream removed, broadcasting: 1
I0408 21:17:29.487315 6 log.go:172] (0xc0016c11e0) Go away received
I0408 21:17:29.487454 6 log.go:172] (0xc0016c11e0) (0xc002909220) Stream removed, broadcasting: 1
I0408 21:17:29.487479 6 log.go:172] (0xc0016c11e0) (0xc001ebc280) Stream removed, broadcasting: 3
I0408 21:17:29.487493 6 log.go:172] (0xc0016c11e0) (0xc00221a5a0) Stream removed, broadcasting: 5
Apr 8 21:17:29.487: INFO: Exec stderr: ""
Apr 8 21:17:29.487: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7687 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 8 21:17:29.487: INFO: >>> kubeConfig: /root/.kube/config
I0408 21:17:29.522647 6 log.go:172] (0xc002779d90) (0xc001ebc5a0) Create stream
I0408 21:17:29.522680 6 log.go:172] (0xc002779d90) (0xc001ebc5a0) Stream added, broadcasting: 1
I0408 21:17:29.525287 6 log.go:172] (0xc002779d90) Reply frame received for 1
I0408 21:17:29.525314 6 log.go:172] (0xc002779d90) (0xc001ebc640) Create stream
I0408 21:17:29.525322 6 log.go:172] (0xc002779d90) (0xc001ebc640) Stream added, broadcasting: 3
I0408 21:17:29.526239 6 log.go:172] (0xc002779d90) Reply frame received for 3
I0408 21:17:29.526265 6 log.go:172] (0xc002779d90) (0xc002909360) Create stream
I0408 21:17:29.526280 6 log.go:172] (0xc002779d90) (0xc002909360) Stream added, broadcasting: 5
I0408 21:17:29.527039 6 log.go:172] (0xc002779d90) Reply frame received for 5
I0408 21:17:29.581721 6 log.go:172] (0xc002779d90) Data frame received for 3
I0408 21:17:29.581764 6 log.go:172] (0xc001ebc640) (3) Data frame handling
I0408 21:17:29.581784 6 log.go:172] (0xc001ebc640) (3) Data frame sent
I0408 21:17:29.581801 6 log.go:172] (0xc002779d90) Data frame received for 3
I0408 21:17:29.581817 6 log.go:172] (0xc001ebc640) (3) Data frame handling
I0408 21:17:29.581846 6 log.go:172] (0xc002779d90) Data frame received for 5
I0408 21:17:29.581864 6 log.go:172] (0xc002909360) (5) Data frame handling
I0408 21:17:29.583489 6 log.go:172] (0xc002779d90) Data frame received for 1
I0408 21:17:29.583550 6 log.go:172] (0xc001ebc5a0) (1) Data frame handling
I0408 21:17:29.583591 6 log.go:172] (0xc001ebc5a0) (1) Data frame sent
I0408 21:17:29.583607 6 log.go:172] (0xc002779d90) (0xc001ebc5a0) Stream removed, broadcasting: 1
I0408 21:17:29.583622 6 log.go:172] (0xc002779d90) Go away received
I0408 21:17:29.583749 6 log.go:172] (0xc002779d90) (0xc001ebc5a0) Stream removed, broadcasting: 1
I0408 21:17:29.583786 6 log.go:172] (0xc002779d90) (0xc001ebc640) Stream removed, broadcasting: 3
I0408 21:17:29.583804 6 log.go:172] (0xc002779d90) (0xc002909360) Stream removed, broadcasting: 5
Apr 8 21:17:29.583: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:29.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7687" for this suite.
• [SLOW TEST:11.199 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":431,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:17:29.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-6e90ec0f-e722-4281-9e86-e96980318a06
STEP: Creating a pod to test consume secrets
Apr 8 21:17:29.728: INFO: Waiting up to 5m0s for pod "pod-secrets-a7e72949-4218-44ee-92f3-929905ac727f" in namespace "secrets-3256" to be "success or failure"
Apr 8 21:17:29.754: INFO: Pod "pod-secrets-a7e72949-4218-44ee-92f3-929905ac727f": Phase="Pending", Reason="", readiness=false. Elapsed: 25.679283ms
Apr 8 21:17:31.758: INFO: Pod "pod-secrets-a7e72949-4218-44ee-92f3-929905ac727f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029583351s
Apr 8 21:17:33.762: INFO: Pod "pod-secrets-a7e72949-4218-44ee-92f3-929905ac727f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033457963s
STEP: Saw pod success
Apr 8 21:17:33.762: INFO: Pod "pod-secrets-a7e72949-4218-44ee-92f3-929905ac727f" satisfied condition "success or failure"
Apr 8 21:17:33.765: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a7e72949-4218-44ee-92f3-929905ac727f container secret-volume-test:
STEP: delete the pod
Apr 8 21:17:33.835: INFO: Waiting for pod pod-secrets-a7e72949-4218-44ee-92f3-929905ac727f to disappear
Apr 8 21:17:33.845: INFO: Pod pod-secrets-a7e72949-4218-44ee-92f3-929905ac727f no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:33.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3256" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":436,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:17:33.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-7b454ba0-e493-41d5-9bb7-d6bb2818e92e
STEP: Creating a pod to test consume secrets
Apr 8 21:17:33.917: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0989cf0b-2172-4165-af48-bf3cf63603af" in namespace "projected-6596" to be "success or failure"
Apr 8 21:17:33.921: INFO: Pod "pod-projected-secrets-0989cf0b-2172-4165-af48-bf3cf63603af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.689878ms
Apr 8 21:17:35.953: INFO: Pod "pod-projected-secrets-0989cf0b-2172-4165-af48-bf3cf63603af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035610869s
Apr 8 21:17:37.957: INFO: Pod "pod-projected-secrets-0989cf0b-2172-4165-af48-bf3cf63603af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040143241s
STEP: Saw pod success
Apr 8 21:17:37.958: INFO: Pod "pod-projected-secrets-0989cf0b-2172-4165-af48-bf3cf63603af" satisfied condition "success or failure"
Apr 8 21:17:37.960: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-0989cf0b-2172-4165-af48-bf3cf63603af container projected-secret-volume-test:
STEP: delete the pod
Apr 8 21:17:38.001: INFO: Waiting for pod pod-projected-secrets-0989cf0b-2172-4165-af48-bf3cf63603af to disappear
Apr 8 21:17:38.007: INFO: Pod pod-projected-secrets-0989cf0b-2172-4165-af48-bf3cf63603af no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:38.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6596" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":446,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:17:38.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 8 21:17:42.666: INFO: Successfully updated pod "pod-update-d6ff5dd6-20b4-424b-9e7f-7320c9826278"
STEP: verifying the updated pod is in kubernetes
Apr 8 21:17:42.675: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:42.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6885" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":488,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:17:42.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 21:17:42.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-79ca1182-7d61-4138-8cb9-6dbaee8b0295" in namespace "projected-6143" to be "success or failure"
Apr 8 21:17:42.831: INFO: Pod "downwardapi-volume-79ca1182-7d61-4138-8cb9-6dbaee8b0295": Phase="Pending", Reason="", readiness=false. Elapsed: 20.021442ms
Apr 8 21:17:44.836: INFO: Pod "downwardapi-volume-79ca1182-7d61-4138-8cb9-6dbaee8b0295": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024245221s
Apr 8 21:17:46.839: INFO: Pod "downwardapi-volume-79ca1182-7d61-4138-8cb9-6dbaee8b0295": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027800467s
STEP: Saw pod success
Apr 8 21:17:46.839: INFO: Pod "downwardapi-volume-79ca1182-7d61-4138-8cb9-6dbaee8b0295" satisfied condition "success or failure"
Apr 8 21:17:46.842: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-79ca1182-7d61-4138-8cb9-6dbaee8b0295 container client-container:
STEP: delete the pod
Apr 8 21:17:46.859: INFO: Waiting for pod downwardapi-volume-79ca1182-7d61-4138-8cb9-6dbaee8b0295 to disappear
Apr 8 21:17:46.864: INFO: Pod downwardapi-volume-79ca1182-7d61-4138-8cb9-6dbaee8b0295 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:46.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6143" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":492,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:17:46.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-8168/configmap-test-dc6d4002-5bb7-416e-a1f9-ad00b5b23e43
STEP: Creating a pod to test consume configMaps
Apr 8 21:17:46.958: INFO: Waiting up to 5m0s for pod "pod-configmaps-1b7922e0-d947-40a0-abb1-b1f28267ec24" in namespace "configmap-8168" to be "success or failure"
Apr 8 21:17:46.966: INFO: Pod "pod-configmaps-1b7922e0-d947-40a0-abb1-b1f28267ec24": Phase="Pending", Reason="", readiness=false. Elapsed: 7.922518ms
Apr 8 21:17:48.969: INFO: Pod "pod-configmaps-1b7922e0-d947-40a0-abb1-b1f28267ec24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011068912s
Apr 8 21:17:50.972: INFO: Pod "pod-configmaps-1b7922e0-d947-40a0-abb1-b1f28267ec24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014443463s
STEP: Saw pod success
Apr 8 21:17:50.972: INFO: Pod "pod-configmaps-1b7922e0-d947-40a0-abb1-b1f28267ec24" satisfied condition "success or failure"
Apr 8 21:17:50.974: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-1b7922e0-d947-40a0-abb1-b1f28267ec24 container env-test:
STEP: delete the pod
Apr 8 21:17:51.019: INFO: Waiting for pod pod-configmaps-1b7922e0-d947-40a0-abb1-b1f28267ec24 to disappear
Apr 8 21:17:51.043: INFO: Pod pod-configmaps-1b7922e0-d947-40a0-abb1-b1f28267ec24 no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:51.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8168" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":503,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:17:51.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Apr 8 21:17:51.098: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix013616993/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:51.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2069" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":34,"skipped":546,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:17:51.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 8 21:17:55.764: INFO: Successfully updated pod "labelsupdatec4b66d13-c6ab-4f1f-8119-7ce9e2bcf59e"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:17:57.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6589" for this suite.
• [SLOW TEST:6.615 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":585,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:17:57.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-nnxtg in namespace proxy-8199
I0408 21:17:57.924728 6 runners.go:189] Created replication controller with name: proxy-service-nnxtg, namespace: proxy-8199, replica count: 1
I0408 21:17:58.975122 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0408 21:17:59.975320 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0408 21:18:00.975541 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0408 21:18:01.975746 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:02.975955 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:03.976225 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:04.976483 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:05.976697 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:06.977001 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:07.977310 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:08.977502 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:09.977719 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0408 21:18:10.978030 6 runners.go:189] proxy-service-nnxtg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 8 21:18:10.995: INFO: setup took 13.112664689s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 8 21:18:11.003: INFO: (0) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 7.122033ms)
Apr 8 21:18:11.003: INFO: (0) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 7.012076ms)
Apr 8 21:18:11.003: INFO: (0) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 7.471619ms)
Apr 8 21:18:11.003: INFO: (0) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... (200; 7.696131ms)
Apr 8 21:18:11.003: INFO: (0) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 7.773096ms)
Apr 8 21:18:11.006: INFO: (0) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... (200; 11.134856ms)
Apr 8 21:18:11.007: INFO: (0) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 11.165644ms)
Apr 8 21:18:11.007: INFO: (0) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 10.857553ms)
Apr 8 21:18:11.007: INFO: (0) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 11.242683ms)
Apr 8 21:18:11.007: INFO: (0) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 11.18268ms)
Apr 8 21:18:11.008: INFO: (0) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 12.083817ms)
Apr 8 21:18:11.010: INFO: (0) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 13.886344ms)
Apr 8 21:18:11.010: INFO: (0) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 14.494139ms)
Apr 8 21:18:11.010: INFO: (0) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 14.944636ms)
Apr 8 21:18:11.011: INFO: (0) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 15.211813ms)
Apr 8 21:18:11.012: INFO: (0) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test (200; 3.042475ms)
Apr 8 21:18:11.015: INFO: (1) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... (200; 3.179534ms)
Apr 8 21:18:11.016: INFO: (1) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test<... (200; 5.67442ms)
Apr 8 21:18:11.018: INFO: (1) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 5.719647ms)
Apr 8 21:18:11.018: INFO: (1) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 5.757486ms)
Apr 8 21:18:11.020: INFO: (2) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 2.15543ms)
Apr 8 21:18:11.023: INFO: (2) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 4.232214ms)
Apr 8 21:18:11.023: INFO: (2) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... (200; 4.375474ms)
Apr 8 21:18:11.023: INFO: (2) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 5.155045ms)
Apr 8 21:18:11.023: INFO: (2) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... (200; 4.107994ms)
Apr 8 21:18:11.023: INFO: (2) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 4.451117ms)
Apr 8 21:18:11.023: INFO: (2) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 4.196452ms)
Apr 8 21:18:11.023: INFO: (2) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 5.390033ms)
Apr 8 21:18:11.024: INFO: (2) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 5.250101ms)
Apr 8 21:18:11.024: INFO: (2) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 5.208409ms)
Apr 8 21:18:11.024: INFO: (2) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 5.819424ms)
Apr 8 21:18:11.024: INFO: (2) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 4.920804ms)
Apr 8 21:18:11.024: INFO: (2) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 5.897619ms)
Apr 8 21:18:11.024: INFO: (2) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 5.99044ms)
Apr 8 21:18:11.028: INFO: (3) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 3.132985ms)
Apr 8 21:18:11.028: INFO: (3) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<...
(200; 3.98702ms) Apr 8 21:18:11.029: INFO: (3) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test (200; 4.674028ms) Apr 8 21:18:11.029: INFO: (3) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 4.718237ms) Apr 8 21:18:11.029: INFO: (3) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 4.760366ms) Apr 8 21:18:11.029: INFO: (3) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 4.84353ms) Apr 8 21:18:11.029: INFO: (3) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... (200; 5.001223ms) Apr 8 21:18:11.030: INFO: (3) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 5.044571ms) Apr 8 21:18:11.030: INFO: (3) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 5.061794ms) Apr 8 21:18:11.033: INFO: (4) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 3.794656ms) Apr 8 21:18:11.034: INFO: (4) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 3.885661ms) Apr 8 21:18:11.034: INFO: (4) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... (200; 3.856407ms) Apr 8 21:18:11.034: INFO: (4) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.96763ms) Apr 8 21:18:11.034: INFO: (4) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 4.016974ms) Apr 8 21:18:11.034: INFO: (4) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... 
(200; 4.294831ms) Apr 8 21:18:11.035: INFO: (4) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 5.417332ms) Apr 8 21:18:11.035: INFO: (4) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 5.371319ms) Apr 8 21:18:11.035: INFO: (4) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 5.399061ms) Apr 8 21:18:11.035: INFO: (4) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test<... (200; 5.264109ms) Apr 8 21:18:11.041: INFO: (5) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 5.282243ms) Apr 8 21:18:11.041: INFO: (5) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 5.258669ms) Apr 8 21:18:11.041: INFO: (5) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 5.489781ms) Apr 8 21:18:11.041: INFO: (5) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 5.564446ms) Apr 8 21:18:11.041: INFO: (5) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 5.509309ms) Apr 8 21:18:11.041: INFO: (5) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... (200; 5.957436ms) Apr 8 21:18:11.042: INFO: (5) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 6.14946ms) Apr 8 21:18:11.042: INFO: (5) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 6.11162ms) Apr 8 21:18:11.045: INFO: (6) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 2.88623ms) Apr 8 21:18:11.045: INFO: (6) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 2.919503ms) Apr 8 21:18:11.045: INFO: (6) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test<... 
(200; 3.586441ms) Apr 8 21:18:11.046: INFO: (6) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 3.60739ms) Apr 8 21:18:11.046: INFO: (6) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... (200; 3.621984ms) Apr 8 21:18:11.046: INFO: (6) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 3.622959ms) Apr 8 21:18:11.046: INFO: (6) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 3.862051ms) Apr 8 21:18:11.050: INFO: (6) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 7.647759ms) Apr 8 21:18:11.050: INFO: (6) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 7.860169ms) Apr 8 21:18:11.051: INFO: (6) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 8.955632ms) Apr 8 21:18:11.051: INFO: (6) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 9.045112ms) Apr 8 21:18:11.051: INFO: (6) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 9.002825ms) Apr 8 21:18:11.051: INFO: (6) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 9.324525ms) Apr 8 21:18:11.056: INFO: (7) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... (200; 4.860957ms) Apr 8 21:18:11.057: INFO: (7) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 5.211344ms) Apr 8 21:18:11.057: INFO: (7) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 5.157237ms) Apr 8 21:18:11.057: INFO: (7) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 5.579672ms) Apr 8 21:18:11.057: INFO: (7) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... 
(200; 5.57675ms) Apr 8 21:18:11.057: INFO: (7) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 5.704276ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 6.034297ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 6.141443ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 6.21295ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 6.286389ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 6.467898ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 6.4054ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 6.398026ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 6.638461ms) Apr 8 21:18:11.058: INFO: (7) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 6.695078ms) Apr 8 21:18:11.060: INFO: (8) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 1.873926ms) Apr 8 21:18:11.063: INFO: (8) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 4.343017ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 5.163737ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 5.386553ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 5.392764ms) Apr 8 
21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... (200; 5.553201ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 5.40221ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 5.353327ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test<... (200; 5.452988ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 5.544769ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 5.654056ms) Apr 8 21:18:11.064: INFO: (8) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 5.859286ms) Apr 8 21:18:11.067: INFO: (9) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 2.60858ms) Apr 8 21:18:11.067: INFO: (9) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 2.867899ms) Apr 8 21:18:11.068: INFO: (9) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 3.561705ms) Apr 8 21:18:11.068: INFO: (9) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... (200; 3.693541ms) Apr 8 21:18:11.068: INFO: (9) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 3.751614ms) Apr 8 21:18:11.068: INFO: (9) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 3.723479ms) Apr 8 21:18:11.068: INFO: (9) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... 
(200; 4.148549ms) Apr 8 21:18:11.069: INFO: (9) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 4.688171ms) Apr 8 21:18:11.069: INFO: (9) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 4.833302ms) Apr 8 21:18:11.069: INFO: (9) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 4.857382ms) Apr 8 21:18:11.069: INFO: (9) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 4.773421ms) Apr 8 21:18:11.069: INFO: (9) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 4.959466ms) Apr 8 21:18:11.069: INFO: (9) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 4.967444ms) Apr 8 21:18:11.069: INFO: (9) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 5.02067ms) Apr 8 21:18:11.070: INFO: (9) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 5.474955ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 2.854872ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 2.942669ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... (200; 3.20231ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.291448ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... 
(200; 3.352064ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 3.477979ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.460938ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 3.437835ms) Apr 8 21:18:11.073: INFO: (10) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 3.495486ms) Apr 8 21:18:11.074: INFO: (10) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 4.294603ms) Apr 8 21:18:11.074: INFO: (10) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 4.257293ms) Apr 8 21:18:11.074: INFO: (10) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 4.243854ms) Apr 8 21:18:11.074: INFO: (10) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 4.40282ms) Apr 8 21:18:11.074: INFO: (10) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 4.482021ms) Apr 8 21:18:11.075: INFO: (10) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 4.799404ms) Apr 8 21:18:11.080: INFO: (11) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 5.708726ms) Apr 8 21:18:11.080: INFO: (11) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 5.663663ms) Apr 8 21:18:11.080: INFO: (11) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 5.841485ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 5.845732ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 
5.824709ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 5.843009ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... (200; 5.83019ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 5.914031ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... (200; 6.555175ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 6.583263ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 6.548937ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 6.539522ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 6.575131ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 6.559236ms) Apr 8 21:18:11.081: INFO: (11) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 6.604538ms) Apr 8 21:18:11.084: INFO: (12) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 2.491388ms) Apr 8 21:18:11.084: INFO: (12) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 2.609919ms) Apr 8 21:18:11.084: INFO: (12) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.005487ms) Apr 8 21:18:11.085: INFO: (12) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 3.325041ms) Apr 8 21:18:11.085: INFO: (12) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... 
(200; 3.335118ms) Apr 8 21:18:11.085: INFO: (12) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 3.392751ms) Apr 8 21:18:11.085: INFO: (12) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... (200; 4.343639ms) Apr 8 21:18:11.086: INFO: (12) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 5.044753ms) Apr 8 21:18:11.087: INFO: (12) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 5.433714ms) Apr 8 21:18:11.087: INFO: (12) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 5.514313ms) Apr 8 21:18:11.087: INFO: (12) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 5.481691ms) Apr 8 21:18:11.087: INFO: (12) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 5.581953ms) Apr 8 21:18:11.087: INFO: (12) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 5.81701ms) Apr 8 21:18:11.114: INFO: (13) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 26.690716ms) Apr 8 21:18:11.114: INFO: (13) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 26.745922ms) Apr 8 21:18:11.117: INFO: (13) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 30.154892ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 45.238365ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... 
(200; 45.325756ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 45.60859ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 45.542295ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... (200; 45.565329ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 45.567636ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 45.630518ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 45.590535ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 45.706279ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 45.698267ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 45.671502ms) Apr 8 21:18:11.133: INFO: (13) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... 
(200; 2.442486ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 2.490136ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 2.710149ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 2.930415ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 2.963581ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 3.144165ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... (200; 3.161236ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 3.204029ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 3.249253ms) Apr 8 21:18:11.136: INFO: (14) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test<... (200; 3.392741ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 3.421754ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 3.445555ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 3.766425ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.79704ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.789103ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... 
(200; 3.993866ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 3.928615ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 4.013847ms) Apr 8 21:18:11.142: INFO: (15) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 4.097758ms) Apr 8 21:18:11.143: INFO: (15) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 4.281768ms) Apr 8 21:18:11.143: INFO: (15) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 4.339837ms) Apr 8 21:18:11.149: INFO: (16) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 6.448363ms) Apr 8 21:18:11.150: INFO: (16) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 7.147646ms) Apr 8 21:18:11.150: INFO: (16) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 7.175316ms) Apr 8 21:18:11.150: INFO: (16) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 7.229612ms) Apr 8 21:18:11.150: INFO: (16) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 7.445725ms) Apr 8 21:18:11.150: INFO: (16) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 7.390785ms) Apr 8 21:18:11.150: INFO: (16) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... (200; 7.406328ms) Apr 8 21:18:11.150: INFO: (16) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... 
(200; 7.543085ms) Apr 8 21:18:11.151: INFO: (16) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 8.32756ms) Apr 8 21:18:11.151: INFO: (16) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 8.390443ms) Apr 8 21:18:11.151: INFO: (16) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 8.38226ms) Apr 8 21:18:11.151: INFO: (16) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 8.535565ms) Apr 8 21:18:11.151: INFO: (16) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 8.51303ms) Apr 8 21:18:11.151: INFO: (16) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 8.481624ms) Apr 8 21:18:11.154: INFO: (17) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 2.523903ms) Apr 8 21:18:11.154: INFO: (17) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 2.749205ms) Apr 8 21:18:11.154: INFO: (17) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 2.976031ms) Apr 8 21:18:11.154: INFO: (17) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.020536ms) Apr 8 21:18:11.154: INFO: (17) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 3.143037ms) Apr 8 21:18:11.155: INFO: (17) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 3.467583ms) Apr 8 21:18:11.155: INFO: (17) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... (200; 3.406479ms) Apr 8 21:18:11.155: INFO: (17) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.468978ms) Apr 8 21:18:11.155: INFO: (17) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test<... 
(200; 3.602997ms) Apr 8 21:18:11.155: INFO: (17) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 3.651557ms) Apr 8 21:18:11.157: INFO: (17) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 5.254449ms) Apr 8 21:18:11.157: INFO: (17) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 5.214274ms) Apr 8 21:18:11.157: INFO: (17) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 5.266199ms) Apr 8 21:18:11.157: INFO: (17) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 5.599034ms) Apr 8 21:18:11.157: INFO: (17) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 5.555868ms) Apr 8 21:18:11.160: INFO: (18) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 3.15992ms) Apr 8 21:18:11.160: INFO: (18) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.252698ms) Apr 8 21:18:11.160: INFO: (18) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 3.316761ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: test<... (200; 3.610184ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:1080/proxy/: ... 
(200; 3.689436ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.676227ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 3.698152ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 3.777862ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 3.841561ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 3.925865ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 4.252869ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 4.183437ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 4.230856ms) Apr 8 21:18:11.161: INFO: (18) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 4.327909ms) Apr 8 21:18:11.163: INFO: (19) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72/proxy/: test (200; 1.687642ms) Apr 8 21:18:11.163: INFO: (19) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 1.701854ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:1080/proxy/: test<... 
(200; 3.163161ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/pods/http:proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 3.293484ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:160/proxy/: foo (200; 3.449359ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:460/proxy/: tls baz (200; 3.495666ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/pods/proxy-service-nnxtg-v6p72:162/proxy/: bar (200; 3.473976ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:443/proxy/: ... (200; 3.612457ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname2/proxy/: bar (200; 3.753595ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/pods/https:proxy-service-nnxtg-v6p72:462/proxy/: tls qux (200; 3.676019ms) Apr 8 21:18:11.165: INFO: (19) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname1/proxy/: tls baz (200; 3.843651ms) Apr 8 21:18:11.167: INFO: (19) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname1/proxy/: foo (200; 5.199636ms) Apr 8 21:18:11.167: INFO: (19) /api/v1/namespaces/proxy-8199/services/proxy-service-nnxtg:portname1/proxy/: foo (200; 5.402146ms) Apr 8 21:18:11.167: INFO: (19) /api/v1/namespaces/proxy-8199/services/http:proxy-service-nnxtg:portname2/proxy/: bar (200; 5.401898ms) Apr 8 21:18:11.167: INFO: (19) /api/v1/namespaces/proxy-8199/services/https:proxy-service-nnxtg:tlsportname2/proxy/: tls qux (200; 5.449529ms) STEP: deleting ReplicationController proxy-service-nnxtg in namespace proxy-8199, will wait for the garbage collector to delete the pods Apr 8 21:18:11.224: INFO: Deleting ReplicationController proxy-service-nnxtg took: 5.549424ms Apr 8 21:18:11.524: INFO: Terminating ReplicationController proxy-service-nnxtg pods took: 300.242592ms [AfterEach] version v1 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:18:13.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8199" for this suite.
• [SLOW TEST:15.743 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":36,"skipped":626,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:18:13.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 21:18:13.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e6afefd-bbd6-4cf5-b17c-8815b353776c" in namespace "downward-api-7071" to be "success or failure"
Apr 8 21:18:13.625: INFO: Pod
"downwardapi-volume-9e6afefd-bbd6-4cf5-b17c-8815b353776c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.11084ms Apr 8 21:18:15.636: INFO: Pod "downwardapi-volume-9e6afefd-bbd6-4cf5-b17c-8815b353776c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059190557s Apr 8 21:18:17.641: INFO: Pod "downwardapi-volume-9e6afefd-bbd6-4cf5-b17c-8815b353776c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063783623s STEP: Saw pod success Apr 8 21:18:17.641: INFO: Pod "downwardapi-volume-9e6afefd-bbd6-4cf5-b17c-8815b353776c" satisfied condition "success or failure" Apr 8 21:18:17.644: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9e6afefd-bbd6-4cf5-b17c-8815b353776c container client-container: STEP: delete the pod Apr 8 21:18:17.677: INFO: Waiting for pod downwardapi-volume-9e6afefd-bbd6-4cf5-b17c-8815b353776c to disappear Apr 8 21:18:17.691: INFO: Pod downwardapi-volume-9e6afefd-bbd6-4cf5-b17c-8815b353776c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:18:17.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7071" for this suite. 
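The Downward API volume test above mounts the pod's own metadata into the container as a file and checks the container log. A minimal sketch of such a pod follows; the names, image, and command are illustrative assumptions, since the test's actual pod spec is not printed in this log:

```yaml
# Hypothetical pod exposing its own name via a downwardAPI volume,
# mirroring what the "should provide podname only" test exercises.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29                     # illustrative; the e2e test uses its own client container
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name          # the pod's own name appears as file content
```

The container prints the mounted file and exits, which is why the test waits for phase "Succeeded" and then reads the logs.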
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":631,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:18:17.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Apr 8 21:18:17.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9608' Apr 8 21:18:18.080: INFO: stderr: "" Apr 8 21:18:18.080: INFO: stdout: "pod/pause created\n" Apr 8 21:18:18.080: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 8 21:18:18.080: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9608" to be "running and ready" Apr 8 21:18:18.093: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.265725ms Apr 8 21:18:20.096: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015868654s Apr 8 21:18:22.100: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.019657538s Apr 8 21:18:22.100: INFO: Pod "pause" satisfied condition "running and ready" Apr 8 21:18:22.100: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Apr 8 21:18:22.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9608' Apr 8 21:18:22.190: INFO: stderr: "" Apr 8 21:18:22.190: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 8 21:18:22.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9608' Apr 8 21:18:22.287: INFO: stderr: "" Apr 8 21:18:22.287: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 8 21:18:22.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9608' Apr 8 21:18:22.384: INFO: stderr: "" Apr 8 21:18:22.384: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 8 21:18:22.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9608' Apr 8 21:18:22.475: INFO: stderr: "" Apr 8 21:18:22.475: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Apr 8 21:18:22.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9608' Apr 8 21:18:22.601: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been 
terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 21:18:22.601: INFO: stdout: "pod \"pause\" force deleted\n" Apr 8 21:18:22.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9608' Apr 8 21:18:22.720: INFO: stderr: "No resources found in kubectl-9608 namespace.\n" Apr 8 21:18:22.720: INFO: stdout: "" Apr 8 21:18:22.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9608 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 8 21:18:22.802: INFO: stderr: "" Apr 8 21:18:22.802: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:18:22.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9608" for this suite. 
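The `kubectl label` steps above patch the pod's `metadata.labels` map in place; the trailing `-` form (`testing-label-`) removes a key. The fragment below shows the state the test verifies after labeling, using the pod and namespace names from the log (the field values are taken from the commands shown, the rest of the pod spec is omitted):

```yaml
# State after: kubectl label pods pause testing-label=testing-label-value --namespace=kubectl-9608
apiVersion: v1
kind: Pod
metadata:
  name: pause
  namespace: kubectl-9608
  labels:
    testing-label: testing-label-value   # removed again by: kubectl label pods pause testing-label-
```

`kubectl get pod pause -L testing-label` then renders the label value as an extra TESTING-LABEL column, which is exactly what the test greps for before and after removal.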
• [SLOW TEST:5.273 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":38,"skipped":651,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:18:22.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:18:23.156: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 8 21:18:28.159: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 8 21:18:28.159: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 8 21:18:30.163: INFO: Creating deployment "test-rollover-deployment" Apr 8 21:18:30.175: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 8 21:18:32.181: INFO: Check revision of new replica set for deployment 
"test-rollover-deployment" Apr 8 21:18:32.187: INFO: Ensure that both replica sets have 1 created replica Apr 8 21:18:32.192: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 8 21:18:32.198: INFO: Updating deployment test-rollover-deployment Apr 8 21:18:32.198: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 8 21:18:34.208: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 8 21:18:34.215: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 8 21:18:34.220: INFO: all replica sets need to contain the pod-template-hash label Apr 8 21:18:34.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977512, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 21:18:36.228: INFO: all replica sets need to contain the pod-template-hash label Apr 8 21:18:36.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977515, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 21:18:38.228: INFO: all replica sets need to contain the pod-template-hash label Apr 8 21:18:38.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977515, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 21:18:40.228: INFO: all replica sets need to contain the pod-template-hash label Apr 8 21:18:40.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977515, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 21:18:42.227: INFO: all replica sets need to contain the pod-template-hash label Apr 8 21:18:42.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977515, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 21:18:44.228: INFO: all replica sets need to contain the pod-template-hash label Apr 8 21:18:44.229: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977515, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977510, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 21:18:46.227: INFO: Apr 8 21:18:46.227: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 8 21:18:46.235: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2454 /apis/apps/v1/namespaces/deployment-2454/deployments/test-rollover-deployment 14184f58-0d42-4be3-88c5-a206d3f4cf90 6503136 2 2020-04-08 21:18:30 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00335ddc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-08 21:18:30 +0000 UTC,LastTransitionTime:2020-04-08 21:18:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-08 21:18:45 +0000 UTC,LastTransitionTime:2020-04-08 21:18:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 8 21:18:46.238: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-2454 /apis/apps/v1/namespaces/deployment-2454/replicasets/test-rollover-deployment-574d6dfbff b13102bc-5d74-45b1-97ce-d81c1bf11e35 6503125 2 2020-04-08 21:18:32 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 14184f58-0d42-4be3-88c5-a206d3f4cf90 
0xc003b38237 0xc003b38238}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b382a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 8 21:18:46.238: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 8 21:18:46.238: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2454 /apis/apps/v1/namespaces/deployment-2454/replicasets/test-rollover-controller fd30e825-e9ed-4b22-96a8-4914ae9692c7 6503134 2 2020-04-08 21:18:23 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 14184f58-0d42-4be3-88c5-a206d3f4cf90 0xc003b38167 0xc003b38168}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] 
[{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003b381c8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 21:18:46.238: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-2454 /apis/apps/v1/namespaces/deployment-2454/replicasets/test-rollover-deployment-f6c94f66c f558413f-aa94-4365-9634-0493335d7b56 6503072 2 2020-04-08 21:18:30 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 14184f58-0d42-4be3-88c5-a206d3f4cf90 0xc003b38310 0xc003b38311}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b38388 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 21:18:46.241: INFO: Pod "test-rollover-deployment-574d6dfbff-5hlx8" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5hlx8 test-rollover-deployment-574d6dfbff- deployment-2454 /api/v1/namespaces/deployment-2454/pods/test-rollover-deployment-574d6dfbff-5hlx8 0d98c833-c0a5-402d-b683-7b65d1618ff6 6503093 0 2020-04-08 21:18:32 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff b13102bc-5d74-45b1-97ce-d81c1bf11e35 0xc0034e5d47 0xc0034e5d48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ks77l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ks77l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ks77l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,T
erminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:18:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:18:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:18:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:18:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.88,StartTime:2020-04-08 21:18:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 21:18:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1a68329a1abeb50dc0d5dc6d5056b406b7caae1d24b6d66559932db2c01eda41,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.88,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:18:46.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2454" for this suite. 
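The Deployment object dumped above can be summarized as the manifest below. The fields shown (name, namespace, labels, image, strategy, `minReadySeconds`) are taken directly from the spec printed in the log; everything not shown is left to defaults:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  namespace: deployment-2454
spec:
  replicas: 1
  minReadySeconds: 10            # a new pod must stay ready 10s before counting as available,
                                 # which is why the test polls for ~15s before the rollover completes
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never drop below the desired replica count during rollover
      maxSurge: 1                # allow one extra pod while the new ReplicaSet comes up
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```

With `maxUnavailable: 0` and `maxSurge: 1`, the controller briefly runs two pods (the `Replicas:2, UpdatedReplicas:1` status repeated in the log) until the new ReplicaSet's pod has been ready for `minReadySeconds`, then scales the old ReplicaSets to zero.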
• [SLOW TEST:23.276 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":39,"skipped":664,"failed":0} [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:18:46.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:18:46.384: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7727 I0408 21:18:46.409490 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7727, replica count: 1 I0408 21:18:47.459840 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 21:18:48.460063 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 21:18:49.460264 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 21:18:50.460462 6 
runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 21:18:50.592: INFO: Created: latency-svc-m9csk Apr 8 21:18:50.608: INFO: Got endpoints: latency-svc-m9csk [47.689514ms] Apr 8 21:18:50.667: INFO: Created: latency-svc-bj2hv Apr 8 21:18:50.688: INFO: Created: latency-svc-rgpnx Apr 8 21:18:50.688: INFO: Got endpoints: latency-svc-bj2hv [80.362629ms] Apr 8 21:18:50.703: INFO: Got endpoints: latency-svc-rgpnx [95.245948ms] Apr 8 21:18:50.724: INFO: Created: latency-svc-k69br Apr 8 21:18:50.740: INFO: Got endpoints: latency-svc-k69br [131.652079ms] Apr 8 21:18:50.760: INFO: Created: latency-svc-q44df Apr 8 21:18:50.799: INFO: Got endpoints: latency-svc-q44df [190.840217ms] Apr 8 21:18:50.813: INFO: Created: latency-svc-bn85r Apr 8 21:18:50.834: INFO: Got endpoints: latency-svc-bn85r [225.591339ms] Apr 8 21:18:50.849: INFO: Created: latency-svc-sqmtv Apr 8 21:18:50.867: INFO: Got endpoints: latency-svc-sqmtv [259.230643ms] Apr 8 21:18:50.892: INFO: Created: latency-svc-qssqp Apr 8 21:18:50.936: INFO: Got endpoints: latency-svc-qssqp [328.079161ms] Apr 8 21:18:50.940: INFO: Created: latency-svc-5jfxc Apr 8 21:18:50.957: INFO: Got endpoints: latency-svc-5jfxc [349.208249ms] Apr 8 21:18:50.976: INFO: Created: latency-svc-46r4m Apr 8 21:18:50.987: INFO: Got endpoints: latency-svc-46r4m [379.126342ms] Apr 8 21:18:51.011: INFO: Created: latency-svc-84gbp Apr 8 21:18:51.080: INFO: Got endpoints: latency-svc-84gbp [471.549412ms] Apr 8 21:18:51.096: INFO: Created: latency-svc-pcxk5 Apr 8 21:18:51.120: INFO: Got endpoints: latency-svc-pcxk5 [512.081184ms] Apr 8 21:18:51.121: INFO: Created: latency-svc-8kbvc Apr 8 21:18:51.144: INFO: Got endpoints: latency-svc-8kbvc [536.093878ms] Apr 8 21:18:51.168: INFO: Created: latency-svc-zjhkp Apr 8 21:18:51.212: INFO: Got endpoints: latency-svc-zjhkp [603.261961ms] Apr 8 21:18:51.227: INFO: Created: latency-svc-jpq6n Apr 8 21:18:51.235: 
INFO: Got endpoints: latency-svc-jpq6n [626.997151ms] Apr 8 21:18:51.258: INFO: Created: latency-svc-527mf Apr 8 21:18:51.266: INFO: Got endpoints: latency-svc-527mf [657.338394ms] Apr 8 21:18:51.287: INFO: Created: latency-svc-tx9q7 Apr 8 21:18:51.296: INFO: Got endpoints: latency-svc-tx9q7 [607.552861ms] Apr 8 21:18:51.350: INFO: Created: latency-svc-ggkhp Apr 8 21:18:51.352: INFO: Got endpoints: latency-svc-ggkhp [649.127741ms] Apr 8 21:18:51.378: INFO: Created: latency-svc-7w7hb Apr 8 21:18:51.393: INFO: Got endpoints: latency-svc-7w7hb [653.097974ms] Apr 8 21:18:51.414: INFO: Created: latency-svc-mgnf6 Apr 8 21:18:51.439: INFO: Got endpoints: latency-svc-mgnf6 [639.551736ms] Apr 8 21:18:51.499: INFO: Created: latency-svc-tnwbn Apr 8 21:18:51.506: INFO: Got endpoints: latency-svc-tnwbn [671.684817ms] Apr 8 21:18:51.527: INFO: Created: latency-svc-gl8bx Apr 8 21:18:51.538: INFO: Got endpoints: latency-svc-gl8bx [671.025494ms] Apr 8 21:18:51.582: INFO: Created: latency-svc-mqskq Apr 8 21:18:51.598: INFO: Got endpoints: latency-svc-mqskq [662.025146ms] Apr 8 21:18:51.648: INFO: Created: latency-svc-m5g62 Apr 8 21:18:51.664: INFO: Got endpoints: latency-svc-m5g62 [706.592128ms] Apr 8 21:18:51.690: INFO: Created: latency-svc-xcjh7 Apr 8 21:18:51.706: INFO: Got endpoints: latency-svc-xcjh7 [718.868395ms] Apr 8 21:18:51.775: INFO: Created: latency-svc-95lpr Apr 8 21:18:51.778: INFO: Got endpoints: latency-svc-95lpr [697.830708ms] Apr 8 21:18:51.803: INFO: Created: latency-svc-6vfss Apr 8 21:18:51.815: INFO: Got endpoints: latency-svc-6vfss [694.146681ms] Apr 8 21:18:51.858: INFO: Created: latency-svc-s4fs8 Apr 8 21:18:51.869: INFO: Got endpoints: latency-svc-s4fs8 [724.867425ms] Apr 8 21:18:51.906: INFO: Created: latency-svc-dq2lm Apr 8 21:18:51.918: INFO: Got endpoints: latency-svc-dq2lm [706.385524ms] Apr 8 21:18:51.965: INFO: Created: latency-svc-r8wpt Apr 8 21:18:51.990: INFO: Got endpoints: latency-svc-r8wpt [754.907369ms] Apr 8 21:18:52.038: INFO: Created: 
latency-svc-ml7lc Apr 8 21:18:52.051: INFO: Got endpoints: latency-svc-ml7lc [784.908ms] Apr 8 21:18:52.079: INFO: Created: latency-svc-fkcsr Apr 8 21:18:52.093: INFO: Got endpoints: latency-svc-fkcsr [796.822919ms] Apr 8 21:18:52.110: INFO: Created: latency-svc-59xqn Apr 8 21:18:52.125: INFO: Got endpoints: latency-svc-59xqn [772.492627ms] Apr 8 21:18:52.194: INFO: Created: latency-svc-fnjjj Apr 8 21:18:52.199: INFO: Got endpoints: latency-svc-fnjjj [806.273972ms] Apr 8 21:18:52.326: INFO: Created: latency-svc-vp5wh Apr 8 21:18:52.337: INFO: Got endpoints: latency-svc-vp5wh [898.207101ms] Apr 8 21:18:52.405: INFO: Created: latency-svc-5w5xn Apr 8 21:18:52.631: INFO: Got endpoints: latency-svc-5w5xn [1.125698191s] Apr 8 21:18:52.728: INFO: Created: latency-svc-bhngj Apr 8 21:18:52.769: INFO: Got endpoints: latency-svc-bhngj [1.230341327s] Apr 8 21:18:52.817: INFO: Created: latency-svc-9h2zw Apr 8 21:18:52.831: INFO: Got endpoints: latency-svc-9h2zw [1.233017245s] Apr 8 21:18:52.865: INFO: Created: latency-svc-7bn5j Apr 8 21:18:52.961: INFO: Got endpoints: latency-svc-7bn5j [1.296768775s] Apr 8 21:18:52.964: INFO: Created: latency-svc-vvqj9 Apr 8 21:18:52.984: INFO: Got endpoints: latency-svc-vvqj9 [1.277171701s] Apr 8 21:18:53.035: INFO: Created: latency-svc-lmsq9 Apr 8 21:18:53.146: INFO: Got endpoints: latency-svc-lmsq9 [1.368051758s] Apr 8 21:18:53.154: INFO: Created: latency-svc-hm7wk Apr 8 21:18:53.170: INFO: Got endpoints: latency-svc-hm7wk [1.355150829s] Apr 8 21:18:53.226: INFO: Created: latency-svc-fks9v Apr 8 21:18:53.284: INFO: Got endpoints: latency-svc-fks9v [1.414825616s] Apr 8 21:18:53.316: INFO: Created: latency-svc-t6px4 Apr 8 21:18:53.332: INFO: Got endpoints: latency-svc-t6px4 [1.414067987s] Apr 8 21:18:53.376: INFO: Created: latency-svc-9pmdg Apr 8 21:18:53.409: INFO: Got endpoints: latency-svc-9pmdg [1.418781623s] Apr 8 21:18:53.435: INFO: Created: latency-svc-d4qts Apr 8 21:18:53.446: INFO: Got endpoints: latency-svc-d4qts [1.395756523s] Apr 8 
21:18:53.491: INFO: Created: latency-svc-vsrf8 Apr 8 21:18:53.507: INFO: Got endpoints: latency-svc-vsrf8 [1.413812493s] Apr 8 21:18:53.551: INFO: Created: latency-svc-gdgqq Apr 8 21:18:53.567: INFO: Got endpoints: latency-svc-gdgqq [1.442281663s] Apr 8 21:18:53.586: INFO: Created: latency-svc-r5wzg Apr 8 21:18:53.611: INFO: Got endpoints: latency-svc-r5wzg [1.411235808s] Apr 8 21:18:53.667: INFO: Created: latency-svc-qtcw8 Apr 8 21:18:53.675: INFO: Got endpoints: latency-svc-qtcw8 [1.338501715s] Apr 8 21:18:53.706: INFO: Created: latency-svc-bqs52 Apr 8 21:18:53.730: INFO: Got endpoints: latency-svc-bqs52 [1.098757658s] Apr 8 21:18:53.760: INFO: Created: latency-svc-kzqtg Apr 8 21:18:53.793: INFO: Got endpoints: latency-svc-kzqtg [1.023819479s] Apr 8 21:18:53.820: INFO: Created: latency-svc-svf4f Apr 8 21:18:53.833: INFO: Got endpoints: latency-svc-svf4f [1.001461622s] Apr 8 21:18:53.873: INFO: Created: latency-svc-rd8zh Apr 8 21:18:53.887: INFO: Got endpoints: latency-svc-rd8zh [926.390679ms] Apr 8 21:18:53.957: INFO: Created: latency-svc-27msn Apr 8 21:18:53.977: INFO: Got endpoints: latency-svc-27msn [993.924323ms] Apr 8 21:18:54.012: INFO: Created: latency-svc-szqps Apr 8 21:18:54.026: INFO: Got endpoints: latency-svc-szqps [879.758792ms] Apr 8 21:18:54.096: INFO: Created: latency-svc-stxhr Apr 8 21:18:54.110: INFO: Got endpoints: latency-svc-stxhr [940.225968ms] Apr 8 21:18:54.137: INFO: Created: latency-svc-jb9bq Apr 8 21:18:54.152: INFO: Got endpoints: latency-svc-jb9bq [867.81393ms] Apr 8 21:18:54.230: INFO: Created: latency-svc-2krqd Apr 8 21:18:54.252: INFO: Got endpoints: latency-svc-2krqd [919.453482ms] Apr 8 21:18:54.294: INFO: Created: latency-svc-shtw5 Apr 8 21:18:54.302: INFO: Got endpoints: latency-svc-shtw5 [893.083758ms] Apr 8 21:18:54.398: INFO: Created: latency-svc-bqjgp Apr 8 21:18:54.402: INFO: Got endpoints: latency-svc-bqjgp [955.382489ms] Apr 8 21:18:54.437: INFO: Created: latency-svc-967k7 Apr 8 21:18:54.453: INFO: Got endpoints: 
latency-svc-967k7 [945.878374ms] Apr 8 21:18:54.485: INFO: Created: latency-svc-5rxlx Apr 8 21:18:54.537: INFO: Got endpoints: latency-svc-5rxlx [969.161049ms] Apr 8 21:18:54.541: INFO: Created: latency-svc-mmgrk Apr 8 21:18:54.569: INFO: Got endpoints: latency-svc-mmgrk [958.442226ms] Apr 8 21:18:54.600: INFO: Created: latency-svc-rsklr Apr 8 21:18:54.616: INFO: Got endpoints: latency-svc-rsklr [940.431005ms] Apr 8 21:18:54.667: INFO: Created: latency-svc-l8sfq Apr 8 21:18:54.670: INFO: Got endpoints: latency-svc-l8sfq [940.134314ms] Apr 8 21:18:54.697: INFO: Created: latency-svc-snzgc Apr 8 21:18:54.713: INFO: Got endpoints: latency-svc-snzgc [919.940976ms] Apr 8 21:18:54.732: INFO: Created: latency-svc-56qqh Apr 8 21:18:54.743: INFO: Got endpoints: latency-svc-56qqh [910.063132ms] Apr 8 21:18:54.762: INFO: Created: latency-svc-zr8fz Apr 8 21:18:54.804: INFO: Got endpoints: latency-svc-zr8fz [917.111141ms] Apr 8 21:18:54.814: INFO: Created: latency-svc-mvb25 Apr 8 21:18:54.827: INFO: Got endpoints: latency-svc-mvb25 [849.369644ms] Apr 8 21:18:54.851: INFO: Created: latency-svc-55jh8 Apr 8 21:18:54.875: INFO: Got endpoints: latency-svc-55jh8 [849.612506ms] Apr 8 21:18:54.955: INFO: Created: latency-svc-c27vk Apr 8 21:18:54.959: INFO: Got endpoints: latency-svc-c27vk [848.629362ms] Apr 8 21:18:54.984: INFO: Created: latency-svc-2767n Apr 8 21:18:54.996: INFO: Got endpoints: latency-svc-2767n [843.785522ms] Apr 8 21:18:55.014: INFO: Created: latency-svc-gd82v Apr 8 21:18:55.039: INFO: Got endpoints: latency-svc-gd82v [786.999034ms] Apr 8 21:18:55.116: INFO: Created: latency-svc-gzlrv Apr 8 21:18:55.120: INFO: Got endpoints: latency-svc-gzlrv [817.516695ms] Apr 8 21:18:55.151: INFO: Created: latency-svc-j84cq Apr 8 21:18:55.165: INFO: Got endpoints: latency-svc-j84cq [763.193464ms] Apr 8 21:18:55.193: INFO: Created: latency-svc-llgj4 Apr 8 21:18:55.201: INFO: Got endpoints: latency-svc-llgj4 [748.22962ms] Apr 8 21:18:55.260: INFO: Created: latency-svc-5ljmp Apr 8 
21:18:55.267: INFO: Got endpoints: latency-svc-5ljmp [730.682301ms] Apr 8 21:18:55.301: INFO: Created: latency-svc-ntcmm Apr 8 21:18:55.316: INFO: Got endpoints: latency-svc-ntcmm [746.897148ms] Apr 8 21:18:55.337: INFO: Created: latency-svc-vnjpb Apr 8 21:18:55.352: INFO: Got endpoints: latency-svc-vnjpb [735.953818ms] Apr 8 21:18:55.397: INFO: Created: latency-svc-c2rgh Apr 8 21:18:55.403: INFO: Got endpoints: latency-svc-c2rgh [732.693774ms] Apr 8 21:18:55.434: INFO: Created: latency-svc-68w94 Apr 8 21:18:55.449: INFO: Got endpoints: latency-svc-68w94 [735.812117ms] Apr 8 21:18:55.541: INFO: Created: latency-svc-xt6xp Apr 8 21:18:55.547: INFO: Got endpoints: latency-svc-xt6xp [803.712407ms] Apr 8 21:18:55.572: INFO: Created: latency-svc-zmc7l Apr 8 21:18:55.586: INFO: Got endpoints: latency-svc-zmc7l [781.900979ms] Apr 8 21:18:55.611: INFO: Created: latency-svc-j9bzc Apr 8 21:18:55.625: INFO: Got endpoints: latency-svc-j9bzc [797.928812ms] Apr 8 21:18:55.679: INFO: Created: latency-svc-qgfbq Apr 8 21:18:55.682: INFO: Got endpoints: latency-svc-qgfbq [806.650396ms] Apr 8 21:18:55.708: INFO: Created: latency-svc-kl4vb Apr 8 21:18:55.725: INFO: Got endpoints: latency-svc-kl4vb [766.385827ms] Apr 8 21:18:55.745: INFO: Created: latency-svc-gcjl2 Apr 8 21:18:55.755: INFO: Got endpoints: latency-svc-gcjl2 [759.186616ms] Apr 8 21:18:55.776: INFO: Created: latency-svc-2jdcs Apr 8 21:18:55.823: INFO: Got endpoints: latency-svc-2jdcs [783.853952ms] Apr 8 21:18:55.842: INFO: Created: latency-svc-8kvdv Apr 8 21:18:55.858: INFO: Got endpoints: latency-svc-8kvdv [738.272866ms] Apr 8 21:18:55.879: INFO: Created: latency-svc-bpqdv Apr 8 21:18:55.895: INFO: Got endpoints: latency-svc-bpqdv [729.603083ms] Apr 8 21:18:55.914: INFO: Created: latency-svc-m6kjs Apr 8 21:18:55.948: INFO: Got endpoints: latency-svc-m6kjs [746.717569ms] Apr 8 21:18:55.967: INFO: Created: latency-svc-6znfk Apr 8 21:18:55.979: INFO: Got endpoints: latency-svc-6znfk [711.648403ms] Apr 8 21:18:55.997: INFO: 
Created: latency-svc-skhzr Apr 8 21:18:56.010: INFO: Got endpoints: latency-svc-skhzr [693.555739ms] Apr 8 21:18:56.027: INFO: Created: latency-svc-tsk6g Apr 8 21:18:56.104: INFO: Got endpoints: latency-svc-tsk6g [751.695704ms] Apr 8 21:18:56.117: INFO: Created: latency-svc-h2m2f Apr 8 21:18:56.130: INFO: Got endpoints: latency-svc-h2m2f [726.66907ms] Apr 8 21:18:56.147: INFO: Created: latency-svc-7ctj5 Apr 8 21:18:56.163: INFO: Got endpoints: latency-svc-7ctj5 [714.73106ms] Apr 8 21:18:56.248: INFO: Created: latency-svc-blfth Apr 8 21:18:56.251: INFO: Got endpoints: latency-svc-blfth [704.178306ms] Apr 8 21:18:56.298: INFO: Created: latency-svc-mhmz2 Apr 8 21:18:56.311: INFO: Got endpoints: latency-svc-mhmz2 [724.286232ms] Apr 8 21:18:56.328: INFO: Created: latency-svc-9qx4x Apr 8 21:18:56.341: INFO: Got endpoints: latency-svc-9qx4x [716.50785ms] Apr 8 21:18:56.403: INFO: Created: latency-svc-pdmdt Apr 8 21:18:56.407: INFO: Got endpoints: latency-svc-pdmdt [725.265316ms] Apr 8 21:18:56.459: INFO: Created: latency-svc-nckh7 Apr 8 21:18:56.467: INFO: Got endpoints: latency-svc-nckh7 [742.268327ms] Apr 8 21:18:56.489: INFO: Created: latency-svc-rvhtc Apr 8 21:18:56.541: INFO: Got endpoints: latency-svc-rvhtc [785.797184ms] Apr 8 21:18:56.549: INFO: Created: latency-svc-swlzp Apr 8 21:18:56.564: INFO: Got endpoints: latency-svc-swlzp [741.359472ms] Apr 8 21:18:56.585: INFO: Created: latency-svc-mcgmj Apr 8 21:18:56.614: INFO: Got endpoints: latency-svc-mcgmj [756.271336ms] Apr 8 21:18:56.691: INFO: Created: latency-svc-zld99 Apr 8 21:18:56.694: INFO: Got endpoints: latency-svc-zld99 [798.918011ms] Apr 8 21:18:56.717: INFO: Created: latency-svc-8ckjk Apr 8 21:18:56.727: INFO: Got endpoints: latency-svc-8ckjk [779.351619ms] Apr 8 21:18:56.749: INFO: Created: latency-svc-2rxqh Apr 8 21:18:56.770: INFO: Got endpoints: latency-svc-2rxqh [790.545815ms] Apr 8 21:18:56.823: INFO: Created: latency-svc-dbv56 Apr 8 21:18:56.862: INFO: Got endpoints: latency-svc-dbv56 
[852.20554ms] Apr 8 21:18:56.862: INFO: Created: latency-svc-8lntm Apr 8 21:18:56.886: INFO: Got endpoints: latency-svc-8lntm [782.363642ms] Apr 8 21:18:56.905: INFO: Created: latency-svc-q9rf2 Apr 8 21:18:56.972: INFO: Got endpoints: latency-svc-q9rf2 [842.082868ms] Apr 8 21:18:56.974: INFO: Created: latency-svc-df7h6 Apr 8 21:18:56.993: INFO: Got endpoints: latency-svc-df7h6 [829.747685ms] Apr 8 21:18:57.018: INFO: Created: latency-svc-99rmp Apr 8 21:18:57.029: INFO: Got endpoints: latency-svc-99rmp [778.019874ms] Apr 8 21:18:57.053: INFO: Created: latency-svc-qlq8r Apr 8 21:18:57.065: INFO: Got endpoints: latency-svc-qlq8r [754.412912ms] Apr 8 21:18:57.116: INFO: Created: latency-svc-vkzhh Apr 8 21:18:57.119: INFO: Got endpoints: latency-svc-vkzhh [777.261959ms] Apr 8 21:18:57.160: INFO: Created: latency-svc-4c6xm Apr 8 21:18:57.174: INFO: Got endpoints: latency-svc-4c6xm [766.783817ms] Apr 8 21:18:57.191: INFO: Created: latency-svc-mqfmf Apr 8 21:18:57.205: INFO: Got endpoints: latency-svc-mqfmf [737.527462ms] Apr 8 21:18:57.247: INFO: Created: latency-svc-2lk7h Apr 8 21:18:57.250: INFO: Got endpoints: latency-svc-2lk7h [708.957473ms] Apr 8 21:18:57.275: INFO: Created: latency-svc-cqqq6 Apr 8 21:18:57.289: INFO: Got endpoints: latency-svc-cqqq6 [724.501792ms] Apr 8 21:18:57.329: INFO: Created: latency-svc-wktzk Apr 8 21:18:57.343: INFO: Got endpoints: latency-svc-wktzk [728.913362ms] Apr 8 21:18:57.391: INFO: Created: latency-svc-6t78b Apr 8 21:18:57.419: INFO: Created: latency-svc-xfjqw Apr 8 21:18:57.419: INFO: Got endpoints: latency-svc-6t78b [725.012797ms] Apr 8 21:18:57.428: INFO: Got endpoints: latency-svc-xfjqw [700.253318ms] Apr 8 21:18:57.449: INFO: Created: latency-svc-sr6lw Apr 8 21:18:57.464: INFO: Got endpoints: latency-svc-sr6lw [694.526988ms] Apr 8 21:18:57.485: INFO: Created: latency-svc-hlmv8 Apr 8 21:18:57.523: INFO: Got endpoints: latency-svc-hlmv8 [661.357132ms] Apr 8 21:18:57.534: INFO: Created: latency-svc-7hxbb Apr 8 21:18:57.562: INFO: 
Got endpoints: latency-svc-7hxbb [676.016723ms] Apr 8 21:18:57.592: INFO: Created: latency-svc-p9826 Apr 8 21:18:57.603: INFO: Got endpoints: latency-svc-p9826 [631.260135ms] Apr 8 21:18:57.623: INFO: Created: latency-svc-jcrd2 Apr 8 21:18:57.685: INFO: Got endpoints: latency-svc-jcrd2 [691.562833ms] Apr 8 21:18:57.688: INFO: Created: latency-svc-dfgpf Apr 8 21:18:57.694: INFO: Got endpoints: latency-svc-dfgpf [664.438248ms] Apr 8 21:18:57.743: INFO: Created: latency-svc-dlgfc Apr 8 21:18:57.760: INFO: Got endpoints: latency-svc-dlgfc [694.686777ms] Apr 8 21:18:57.778: INFO: Created: latency-svc-nfvmd Apr 8 21:18:57.852: INFO: Got endpoints: latency-svc-nfvmd [733.351463ms] Apr 8 21:18:57.854: INFO: Created: latency-svc-kjlnc Apr 8 21:18:57.880: INFO: Got endpoints: latency-svc-kjlnc [705.887815ms] Apr 8 21:18:57.882: INFO: Created: latency-svc-4jgd2 Apr 8 21:18:57.904: INFO: Got endpoints: latency-svc-4jgd2 [699.155076ms] Apr 8 21:18:57.930: INFO: Created: latency-svc-zqq7v Apr 8 21:18:57.941: INFO: Got endpoints: latency-svc-zqq7v [690.93155ms] Apr 8 21:18:57.996: INFO: Created: latency-svc-jjmt9 Apr 8 21:18:57.999: INFO: Got endpoints: latency-svc-jjmt9 [710.52214ms] Apr 8 21:18:58.060: INFO: Created: latency-svc-gx5xg Apr 8 21:18:58.074: INFO: Got endpoints: latency-svc-gx5xg [730.494044ms] Apr 8 21:18:58.096: INFO: Created: latency-svc-78d2n Apr 8 21:18:58.134: INFO: Got endpoints: latency-svc-78d2n [714.717425ms] Apr 8 21:18:58.144: INFO: Created: latency-svc-cs974 Apr 8 21:18:58.158: INFO: Got endpoints: latency-svc-cs974 [730.540686ms] Apr 8 21:18:58.181: INFO: Created: latency-svc-4t6tg Apr 8 21:18:58.194: INFO: Got endpoints: latency-svc-4t6tg [730.013682ms] Apr 8 21:18:58.229: INFO: Created: latency-svc-rpcmg Apr 8 21:18:58.265: INFO: Got endpoints: latency-svc-rpcmg [742.152386ms] Apr 8 21:18:58.294: INFO: Created: latency-svc-2ncj6 Apr 8 21:18:58.309: INFO: Got endpoints: latency-svc-2ncj6 [746.889075ms] Apr 8 21:18:58.336: INFO: Created: 
latency-svc-xhfjg Apr 8 21:18:58.358: INFO: Got endpoints: latency-svc-xhfjg [754.317249ms] Apr 8 21:18:58.404: INFO: Created: latency-svc-6c5wm Apr 8 21:18:58.407: INFO: Got endpoints: latency-svc-6c5wm [721.84466ms] Apr 8 21:18:58.433: INFO: Created: latency-svc-8bblt Apr 8 21:18:58.447: INFO: Got endpoints: latency-svc-8bblt [753.887146ms] Apr 8 21:18:58.481: INFO: Created: latency-svc-98qtn Apr 8 21:18:58.541: INFO: Got endpoints: latency-svc-98qtn [781.116009ms] Apr 8 21:18:58.564: INFO: Created: latency-svc-7krl4 Apr 8 21:18:58.580: INFO: Got endpoints: latency-svc-7krl4 [727.658983ms] Apr 8 21:18:58.606: INFO: Created: latency-svc-zkvhh Apr 8 21:18:58.622: INFO: Got endpoints: latency-svc-zkvhh [742.367982ms] Apr 8 21:18:58.679: INFO: Created: latency-svc-pxf9s Apr 8 21:18:58.682: INFO: Got endpoints: latency-svc-pxf9s [777.654827ms] Apr 8 21:18:58.750: INFO: Created: latency-svc-8cjjv Apr 8 21:18:58.761: INFO: Got endpoints: latency-svc-8cjjv [820.301994ms] Apr 8 21:18:58.817: INFO: Created: latency-svc-zcx5k Apr 8 21:18:58.823: INFO: Got endpoints: latency-svc-zcx5k [824.21274ms] Apr 8 21:18:58.846: INFO: Created: latency-svc-6l65h Apr 8 21:18:58.858: INFO: Got endpoints: latency-svc-6l65h [783.841683ms] Apr 8 21:18:58.876: INFO: Created: latency-svc-28zwq Apr 8 21:18:58.901: INFO: Got endpoints: latency-svc-28zwq [767.674532ms] Apr 8 21:18:58.991: INFO: Created: latency-svc-n8d68 Apr 8 21:18:58.995: INFO: Got endpoints: latency-svc-n8d68 [836.443195ms] Apr 8 21:18:59.029: INFO: Created: latency-svc-l8fhj Apr 8 21:18:59.041: INFO: Got endpoints: latency-svc-l8fhj [847.051668ms] Apr 8 21:18:59.062: INFO: Created: latency-svc-s5lkn Apr 8 21:18:59.082: INFO: Got endpoints: latency-svc-s5lkn [816.32497ms] Apr 8 21:18:59.122: INFO: Created: latency-svc-fqnjg Apr 8 21:18:59.129: INFO: Got endpoints: latency-svc-fqnjg [819.810883ms] Apr 8 21:18:59.164: INFO: Created: latency-svc-hnbkd Apr 8 21:18:59.178: INFO: Got endpoints: latency-svc-hnbkd [819.99778ms] Apr 8 
21:18:59.195: INFO: Created: latency-svc-zb4vs Apr 8 21:18:59.208: INFO: Got endpoints: latency-svc-zb4vs [801.023053ms] Apr 8 21:18:59.272: INFO: Created: latency-svc-nrmxv Apr 8 21:18:59.280: INFO: Got endpoints: latency-svc-nrmxv [832.012184ms] Apr 8 21:18:59.314: INFO: Created: latency-svc-5m76z Apr 8 21:18:59.334: INFO: Got endpoints: latency-svc-5m76z [793.446261ms] Apr 8 21:18:59.355: INFO: Created: latency-svc-f2dbh Apr 8 21:18:59.370: INFO: Got endpoints: latency-svc-f2dbh [790.434993ms] Apr 8 21:18:59.415: INFO: Created: latency-svc-xnk5f Apr 8 21:18:59.447: INFO: Got endpoints: latency-svc-xnk5f [824.232222ms] Apr 8 21:18:59.553: INFO: Created: latency-svc-b94kj Apr 8 21:18:59.556: INFO: Got endpoints: latency-svc-b94kj [874.471676ms] Apr 8 21:18:59.578: INFO: Created: latency-svc-5srwr Apr 8 21:18:59.591: INFO: Got endpoints: latency-svc-5srwr [829.782633ms] Apr 8 21:18:59.608: INFO: Created: latency-svc-n4rwv Apr 8 21:18:59.621: INFO: Got endpoints: latency-svc-n4rwv [798.018474ms] Apr 8 21:18:59.639: INFO: Created: latency-svc-4bnh2 Apr 8 21:18:59.721: INFO: Got endpoints: latency-svc-4bnh2 [862.896441ms] Apr 8 21:18:59.735: INFO: Created: latency-svc-xgm97 Apr 8 21:18:59.748: INFO: Got endpoints: latency-svc-xgm97 [846.572871ms] Apr 8 21:18:59.771: INFO: Created: latency-svc-dmnl9 Apr 8 21:18:59.784: INFO: Got endpoints: latency-svc-dmnl9 [789.703602ms] Apr 8 21:18:59.801: INFO: Created: latency-svc-6gg2w Apr 8 21:18:59.858: INFO: Got endpoints: latency-svc-6gg2w [816.807389ms] Apr 8 21:18:59.871: INFO: Created: latency-svc-w25nr Apr 8 21:18:59.893: INFO: Got endpoints: latency-svc-w25nr [811.082078ms] Apr 8 21:18:59.914: INFO: Created: latency-svc-tg4x8 Apr 8 21:18:59.923: INFO: Got endpoints: latency-svc-tg4x8 [793.977592ms] Apr 8 21:18:59.951: INFO: Created: latency-svc-2kmj9 Apr 8 21:18:59.990: INFO: Got endpoints: latency-svc-2kmj9 [812.619278ms] Apr 8 21:19:00.011: INFO: Created: latency-svc-79d8g Apr 8 21:19:00.026: INFO: Got endpoints: 
latency-svc-79d8g [817.897525ms] Apr 8 21:19:00.047: INFO: Created: latency-svc-kvzcv Apr 8 21:19:00.062: INFO: Got endpoints: latency-svc-kvzcv [782.334837ms] Apr 8 21:19:00.082: INFO: Created: latency-svc-9mbkw Apr 8 21:19:00.140: INFO: Got endpoints: latency-svc-9mbkw [805.239831ms] Apr 8 21:19:00.161: INFO: Created: latency-svc-pd6nx Apr 8 21:19:00.177: INFO: Got endpoints: latency-svc-pd6nx [806.397748ms] Apr 8 21:19:00.197: INFO: Created: latency-svc-zhsk5 Apr 8 21:19:00.213: INFO: Got endpoints: latency-svc-zhsk5 [765.975322ms] Apr 8 21:19:00.233: INFO: Created: latency-svc-wbmf7 Apr 8 21:19:00.302: INFO: Got endpoints: latency-svc-wbmf7 [745.318357ms] Apr 8 21:19:00.309: INFO: Created: latency-svc-bfttn Apr 8 21:19:00.333: INFO: Got endpoints: latency-svc-bfttn [741.761014ms] Apr 8 21:19:00.365: INFO: Created: latency-svc-8zr5c Apr 8 21:19:00.382: INFO: Got endpoints: latency-svc-8zr5c [760.220366ms] Apr 8 21:19:00.401: INFO: Created: latency-svc-qwtfv Apr 8 21:19:00.445: INFO: Got endpoints: latency-svc-qwtfv [724.534891ms] Apr 8 21:19:00.460: INFO: Created: latency-svc-9v8cg Apr 8 21:19:00.478: INFO: Got endpoints: latency-svc-9v8cg [730.381858ms] Apr 8 21:19:00.509: INFO: Created: latency-svc-sdsfr Apr 8 21:19:00.619: INFO: Got endpoints: latency-svc-sdsfr [834.270643ms] Apr 8 21:19:00.635: INFO: Created: latency-svc-7ghzt Apr 8 21:19:00.647: INFO: Got endpoints: latency-svc-7ghzt [788.336161ms] Apr 8 21:19:00.671: INFO: Created: latency-svc-s422q Apr 8 21:19:00.683: INFO: Got endpoints: latency-svc-s422q [789.789363ms] Apr 8 21:19:00.712: INFO: Created: latency-svc-4hnb7 Apr 8 21:19:00.757: INFO: Got endpoints: latency-svc-4hnb7 [833.978431ms] Apr 8 21:19:00.760: INFO: Created: latency-svc-hb5k8 Apr 8 21:19:00.773: INFO: Got endpoints: latency-svc-hb5k8 [782.822729ms] Apr 8 21:19:00.816: INFO: Created: latency-svc-5h4fz Apr 8 21:19:00.827: INFO: Got endpoints: latency-svc-5h4fz [801.559986ms] Apr 8 21:19:00.907: INFO: Created: latency-svc-njzpf Apr 8 
21:19:00.909: INFO: Got endpoints: latency-svc-njzpf [847.481774ms] Apr 8 21:19:00.941: INFO: Created: latency-svc-99qx4 Apr 8 21:19:00.954: INFO: Got endpoints: latency-svc-99qx4 [814.325977ms] Apr 8 21:19:00.978: INFO: Created: latency-svc-zcccp Apr 8 21:19:01.001: INFO: Got endpoints: latency-svc-zcccp [824.397921ms] Apr 8 21:19:01.062: INFO: Created: latency-svc-2bgqz Apr 8 21:19:01.068: INFO: Got endpoints: latency-svc-2bgqz [855.454102ms] Apr 8 21:19:01.097: INFO: Created: latency-svc-jnkzs Apr 8 21:19:01.111: INFO: Got endpoints: latency-svc-jnkzs [808.811154ms] Apr 8 21:19:01.146: INFO: Created: latency-svc-nfwc8 Apr 8 21:19:01.159: INFO: Got endpoints: latency-svc-nfwc8 [825.922046ms] Apr 8 21:19:01.199: INFO: Created: latency-svc-v9txc Apr 8 21:19:01.213: INFO: Got endpoints: latency-svc-v9txc [831.621686ms] Apr 8 21:19:01.254: INFO: Created: latency-svc-hsd94 Apr 8 21:19:01.280: INFO: Got endpoints: latency-svc-hsd94 [834.228177ms] Apr 8 21:19:01.319: INFO: Created: latency-svc-tvdcr Apr 8 21:19:01.340: INFO: Got endpoints: latency-svc-tvdcr [861.440699ms] Apr 8 21:19:01.362: INFO: Created: latency-svc-jzpvl Apr 8 21:19:01.376: INFO: Got endpoints: latency-svc-jzpvl [757.340556ms] Apr 8 21:19:01.403: INFO: Created: latency-svc-29bjz Apr 8 21:19:01.439: INFO: Got endpoints: latency-svc-29bjz [792.529666ms] Apr 8 21:19:01.456: INFO: Created: latency-svc-8xhb8 Apr 8 21:19:01.473: INFO: Got endpoints: latency-svc-8xhb8 [790.116528ms] Apr 8 21:19:01.493: INFO: Created: latency-svc-6cmtd Apr 8 21:19:01.511: INFO: Got endpoints: latency-svc-6cmtd [754.195699ms] Apr 8 21:19:01.534: INFO: Created: latency-svc-mfwg6 Apr 8 21:19:01.565: INFO: Got endpoints: latency-svc-mfwg6 [791.671095ms] Apr 8 21:19:01.565: INFO: Latencies: [80.362629ms 95.245948ms 131.652079ms 190.840217ms 225.591339ms 259.230643ms 328.079161ms 349.208249ms 379.126342ms 471.549412ms 512.081184ms 536.093878ms 603.261961ms 607.552861ms 626.997151ms 631.260135ms 639.551736ms 649.127741ms 
653.097974ms 657.338394ms 661.357132ms 662.025146ms 664.438248ms 671.025494ms 671.684817ms 676.016723ms 690.93155ms 691.562833ms 693.555739ms 694.146681ms 694.526988ms 694.686777ms 697.830708ms 699.155076ms 700.253318ms 704.178306ms 705.887815ms 706.385524ms 706.592128ms 708.957473ms 710.52214ms 711.648403ms 714.717425ms 714.73106ms 716.50785ms 718.868395ms 721.84466ms 724.286232ms 724.501792ms 724.534891ms 724.867425ms 725.012797ms 725.265316ms 726.66907ms 727.658983ms 728.913362ms 729.603083ms 730.013682ms 730.381858ms 730.494044ms 730.540686ms 730.682301ms 732.693774ms 733.351463ms 735.812117ms 735.953818ms 737.527462ms 738.272866ms 741.359472ms 741.761014ms 742.152386ms 742.268327ms 742.367982ms 745.318357ms 746.717569ms 746.889075ms 746.897148ms 748.22962ms 751.695704ms 753.887146ms 754.195699ms 754.317249ms 754.412912ms 754.907369ms 756.271336ms 757.340556ms 759.186616ms 760.220366ms 763.193464ms 765.975322ms 766.385827ms 766.783817ms 767.674532ms 772.492627ms 777.261959ms 777.654827ms 778.019874ms 779.351619ms 781.116009ms 781.900979ms 782.334837ms 782.363642ms 782.822729ms 783.841683ms 783.853952ms 784.908ms 785.797184ms 786.999034ms 788.336161ms 789.703602ms 789.789363ms 790.116528ms 790.434993ms 790.545815ms 791.671095ms 792.529666ms 793.446261ms 793.977592ms 796.822919ms 797.928812ms 798.018474ms 798.918011ms 801.023053ms 801.559986ms 803.712407ms 805.239831ms 806.273972ms 806.397748ms 806.650396ms 808.811154ms 811.082078ms 812.619278ms 814.325977ms 816.32497ms 816.807389ms 817.516695ms 817.897525ms 819.810883ms 819.99778ms 820.301994ms 824.21274ms 824.232222ms 824.397921ms 825.922046ms 829.747685ms 829.782633ms 831.621686ms 832.012184ms 833.978431ms 834.228177ms 834.270643ms 836.443195ms 842.082868ms 843.785522ms 846.572871ms 847.051668ms 847.481774ms 848.629362ms 849.369644ms 849.612506ms 852.20554ms 855.454102ms 861.440699ms 862.896441ms 867.81393ms 874.471676ms 879.758792ms 893.083758ms 898.207101ms 910.063132ms 917.111141ms 919.453482ms 919.940976ms 
926.390679ms 940.134314ms 940.225968ms 940.431005ms 945.878374ms 955.382489ms 958.442226ms 969.161049ms 993.924323ms 1.001461622s 1.023819479s 1.098757658s 1.125698191s 1.230341327s 1.233017245s 1.277171701s 1.296768775s 1.338501715s 1.355150829s 1.368051758s 1.395756523s 1.411235808s 1.413812493s 1.414067987s 1.414825616s 1.418781623s 1.442281663s]
Apr 8 21:19:01.565: INFO: 50 %ile: 782.334837ms
Apr 8 21:19:01.565: INFO: 90 %ile: 969.161049ms
Apr 8 21:19:01.565: INFO: 99 %ile: 1.418781623s
Apr 8 21:19:01.565: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:19:01.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7727" for this suite.
• [SLOW TEST:15.325 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":40,"skipped":664,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:19:01.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
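The 50/90/99 %ile figures reported above are derived from the 200 collected endpoint-creation latency samples. A minimal sketch of a nearest-rank percentile computation over such samples (the exact rounding/interpolation convention the e2e framework uses is an assumption here, and the sample list below is an illustrative toy subset, not the full 200 values):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples, p in (0, 100]."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100.0 * len(ordered))  # 1-based rank into the sorted list
    return ordered[rank - 1]

# Toy subset of the latencies logged above, in milliseconds (illustrative only).
latencies_ms = [80.36, 95.25, 131.65, 782.33, 969.16, 1418.78]
print(percentile(latencies_ms, 50),
      percentile(latencies_ms, 90),
      percentile(latencies_ms, 99))
```

With the full 200-sample set, this style of computation yields the summary the test prints before asserting that the tail latency is "not very high".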
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9823.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9823.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9823.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9823.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9823.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9823.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9823.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9823.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9823.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9823.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.176.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.176.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.176.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.176.220_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9823.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9823.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9823.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9823.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9823.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9823.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9823.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9823.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9823.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9823.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9823.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 220.176.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.176.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.176.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.176.220_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 21:19:07.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:07.779: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:07.786: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:07.805: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:07.865: INFO: Unable to read jessie_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:07.882: INFO: Unable to read jessie_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:07.887: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod 
dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:07.893: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:07.953: INFO: Lookups using dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef failed for: [wheezy_udp@dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_udp@dns-test-service.dns-9823.svc.cluster.local jessie_tcp@dns-test-service.dns-9823.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local] Apr 8 21:19:12.968: INFO: Unable to read wheezy_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:12.971: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:13.051: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:13.064: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod 
dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:13.211: INFO: Unable to read jessie_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:13.215: INFO: Unable to read jessie_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:13.218: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:13.241: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:13.350: INFO: Lookups using dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef failed for: [wheezy_udp@dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_udp@dns-test-service.dns-9823.svc.cluster.local jessie_tcp@dns-test-service.dns-9823.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local] Apr 8 21:19:17.967: INFO: Unable to read wheezy_udp@dns-test-service.dns-9823.svc.cluster.local from pod 
dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:17.975: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:17.977: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:17.979: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:18.123: INFO: Unable to read jessie_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:18.143: INFO: Unable to read jessie_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:18.156: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:18.159: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the 
requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:18.323: INFO: Lookups using dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef failed for: [wheezy_udp@dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_udp@dns-test-service.dns-9823.svc.cluster.local jessie_tcp@dns-test-service.dns-9823.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local] Apr 8 21:19:22.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:22.962: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:22.966: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:22.969: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:22.991: INFO: Unable to read jessie_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods 
dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:22.994: INFO: Unable to read jessie_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:22.997: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:23.000: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:23.019: INFO: Lookups using dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef failed for: [wheezy_udp@dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_udp@dns-test-service.dns-9823.svc.cluster.local jessie_tcp@dns-test-service.dns-9823.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local] Apr 8 21:19:27.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:27.962: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) 
Apr 8 21:19:27.966: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:27.970: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:27.987: INFO: Unable to read jessie_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:27.989: INFO: Unable to read jessie_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:27.992: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:27.995: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:28.012: INFO: Lookups using dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef failed for: [wheezy_udp@dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local 
jessie_udp@dns-test-service.dns-9823.svc.cluster.local jessie_tcp@dns-test-service.dns-9823.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local] Apr 8 21:19:32.958: INFO: Unable to read wheezy_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:32.961: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:32.965: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:32.968: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:32.990: INFO: Unable to read jessie_udp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:32.993: INFO: Unable to read jessie_tcp@dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:32.996: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod 
dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:32.999: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local from pod dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef: the server could not find the requested resource (get pods dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef) Apr 8 21:19:33.018: INFO: Lookups using dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef failed for: [wheezy_udp@dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@dns-test-service.dns-9823.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_udp@dns-test-service.dns-9823.svc.cluster.local jessie_tcp@dns-test-service.dns-9823.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9823.svc.cluster.local] Apr 8 21:19:38.013: INFO: DNS probes using dns-9823/dns-test-1964369f-94d7-42b7-85f4-258440e1e8ef succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:19:38.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9823" for this suite. 
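The DNS spec above probes each expected name from inside the pod and records every successful answer as an OK marker file under /results, which is the pattern visible in the truncated probe command at the start of this section (`test -n "$$check" && echo OK > /results/...`). A minimal sketch of that marker-file pattern, with a stubbed resolver and a /tmp path standing in for the real cluster DNS lookups and the pod's /results volume (all names here are hypothetical):

```shell
#!/bin/sh
# Sketch of the e2e DNS probe's success-marker pattern:
# a lookup that returns a non-empty answer writes an OK file;
# the test passes once every expected marker file exists.

resolve() {
  # Stub resolver for illustration only; the real probe shells out to
  # a DNS lookup against the cluster DNS service.
  case "$1" in
    localhost) echo "127.0.0.1" ;;
    *) echo "" ;;
  esac
}

probe() {
  name="$1"; out="$2"
  check=$(resolve "$name")
  # Only a non-empty answer produces a marker, as in the log's probe loop.
  test -n "$check" && echo OK > "$out"
}

mkdir -p /tmp/results
probe localhost /tmp/results/localhost_marker
probe no-such-name /tmp/results/missing_marker
ls /tmp/results
```

The test framework then polls the marker directory (the repeated "Unable to read ... from pod" lines above are those polls failing) until all markers appear, at which point it logs "DNS probes ... succeeded".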
• [SLOW TEST:36.897 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":41,"skipped":685,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:19:38.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-00e15f3e-ac27-4446-9a89-5ea4a9c731a1 STEP: Creating a pod to test consume configMaps Apr 8 21:19:38.687: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-800e70d4-c5a1-4bbe-8f22-a4f3cabd394a" in namespace "projected-541" to be "success or failure" Apr 8 21:19:38.691: INFO: Pod "pod-projected-configmaps-800e70d4-c5a1-4bbe-8f22-a4f3cabd394a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.190541ms Apr 8 21:19:40.695: INFO: Pod "pod-projected-configmaps-800e70d4-c5a1-4bbe-8f22-a4f3cabd394a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007060226s Apr 8 21:19:42.700: INFO: Pod "pod-projected-configmaps-800e70d4-c5a1-4bbe-8f22-a4f3cabd394a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012079442s STEP: Saw pod success Apr 8 21:19:42.700: INFO: Pod "pod-projected-configmaps-800e70d4-c5a1-4bbe-8f22-a4f3cabd394a" satisfied condition "success or failure" Apr 8 21:19:42.703: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-800e70d4-c5a1-4bbe-8f22-a4f3cabd394a container projected-configmap-volume-test: STEP: delete the pod Apr 8 21:19:42.771: INFO: Waiting for pod pod-projected-configmaps-800e70d4-c5a1-4bbe-8f22-a4f3cabd394a to disappear Apr 8 21:19:42.775: INFO: Pod pod-projected-configmaps-800e70d4-c5a1-4bbe-8f22-a4f3cabd394a no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:19:42.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-541" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":703,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:19:42.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:19:53.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7989" for this suite. • [SLOW TEST:11.138 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":43,"skipped":733,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:19:53.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be 
possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:20:25.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9050" for this suite. • [SLOW TEST:31.452 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":742,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:20:25.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name 
s-test-opt-del-6d05b92c-83bb-4f80-a806-aa498285a6a2 STEP: Creating secret with name s-test-opt-upd-f7f9f023-1606-4366-994c-16c9b2ce6446 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-6d05b92c-83bb-4f80-a806-aa498285a6a2 STEP: Updating secret s-test-opt-upd-f7f9f023-1606-4366-994c-16c9b2ce6446 STEP: Creating secret with name s-test-opt-create-ac5c2f1d-bd45-429c-a16f-a7214951dec7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:21:57.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-774" for this suite. • [SLOW TEST:92.581 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":747,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:21:57.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 
[It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:21:58.032: INFO: Creating deployment "test-recreate-deployment" Apr 8 21:21:58.047: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 8 21:21:58.059: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 8 21:22:00.065: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 8 21:22:00.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977718, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977718, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977718, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977718, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 21:22:02.072: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 8 21:22:02.080: INFO: Updating deployment test-recreate-deployment Apr 8 21:22:02.080: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 8 21:22:02.397: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:{test-recreate-deployment deployment-4756 /apis/apps/v1/namespaces/deployment-4756/deployments/test-recreate-deployment 313a96d8-f56e-465e-b6be-f6cdf6a52e6a 6505219 2 2020-04-08 21:21:58 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b380c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-08 21:22:02 +0000 UTC,LastTransitionTime:2020-04-08 21:22:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-08 21:22:02 +0000 UTC,LastTransitionTime:2020-04-08 21:21:58 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 8 
21:22:02.469: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-4756 /apis/apps/v1/namespaces/deployment-4756/replicasets/test-recreate-deployment-5f94c574ff ff790e36-7667-4334-abf9-a6bc7d867832 6505218 1 2020-04-08 21:22:02 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 313a96d8-f56e-465e-b6be-f6cdf6a52e6a 0xc003b38457 0xc003b38458}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b384b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 21:22:02.469: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 8 21:22:02.469: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-4756 
/apis/apps/v1/namespaces/deployment-4756/replicasets/test-recreate-deployment-799c574856 a7b5d62e-302b-487e-8a7d-67a57b0e4883 6505208 2 2020-04-08 21:21:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 313a96d8-f56e-465e-b6be-f6cdf6a52e6a 0xc003b38527 0xc003b38528}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b38598 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 21:22:02.473: INFO: Pod "test-recreate-deployment-5f94c574ff-s6gxs" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-s6gxs test-recreate-deployment-5f94c574ff- deployment-4756 /api/v1/namespaces/deployment-4756/pods/test-recreate-deployment-5f94c574ff-s6gxs d570a75c-412c-48c4-b536-e049b5fe389a 6505220 0 2020-04-08 21:22:02 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] 
map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff ff790e36-7667-4334-abf9-a6bc7d867832 0xc003b389d7 0xc003b389d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bdwb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bdwb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bdwb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGro
ups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:22:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:22:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:22:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:22:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-08 21:22:02 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:02.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4756" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":46,"skipped":762,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:02.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 
21:22:04.183: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:22:06.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977724, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977724, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977724, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977723, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:22:09.339: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:22:09.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9545-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:10.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3778" for this suite. 
STEP: Destroying namespace "webhook-3778-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.101 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":47,"skipped":780,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:10.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination 
message should be set Apr 8 21:22:14.683: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:14.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5409" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":801,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:14.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0408 21:22:15.942107 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 8 21:22:15.942: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:15.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8620" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":49,"skipped":806,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:15.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 8 21:22:16.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5272' Apr 8 21:22:16.145: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 8 21:22:16.145: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Apr 8 21:22:16.177: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-tgkcd] Apr 8 21:22:16.177: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-tgkcd" in namespace "kubectl-5272" to be "running and ready" Apr 8 21:22:16.180: INFO: Pod "e2e-test-httpd-rc-tgkcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.872286ms Apr 8 21:22:18.263: INFO: Pod "e2e-test-httpd-rc-tgkcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085700213s Apr 8 21:22:20.267: INFO: Pod "e2e-test-httpd-rc-tgkcd": Phase="Running", Reason="", readiness=true. Elapsed: 4.089840569s Apr 8 21:22:20.267: INFO: Pod "e2e-test-httpd-rc-tgkcd" satisfied condition "running and ready" Apr 8 21:22:20.267: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-tgkcd] Apr 8 21:22:20.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5272' Apr 8 21:22:20.403: INFO: stderr: "" Apr 8 21:22:20.403: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.165. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.165. 
Set the 'ServerName' directive globally to suppress this message\n[Wed Apr 08 21:22:18.918381 2020] [mpm_event:notice] [pid 1:tid 140612443663208] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed Apr 08 21:22:18.918425 2020] [core:notice] [pid 1:tid 140612443663208] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 Apr 8 21:22:20.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5272' Apr 8 21:22:20.516: INFO: stderr: "" Apr 8 21:22:20.516: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:20.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5272" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":50,"skipped":807,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:20.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 8 21:22:20.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-4409' Apr 8 21:22:20.684: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 8 21:22:20.684: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 Apr 8 21:22:22.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4409' Apr 8 21:22:22.938: INFO: stderr: "" Apr 8 21:22:22.938: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:22.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4409" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":51,"skipped":828,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:22.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:27.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1004" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":861,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:27.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-gt7b STEP: Creating a pod to test atomic-volume-subpath Apr 8 21:22:27.464: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gt7b" in namespace "subpath-1521" to be "success or failure" Apr 8 21:22:27.468: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.239851ms Apr 8 21:22:29.473: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008681849s Apr 8 21:22:31.477: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 4.012684276s Apr 8 21:22:33.481: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.016808454s Apr 8 21:22:35.485: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 8.020152858s Apr 8 21:22:37.488: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 10.023908085s Apr 8 21:22:39.493: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 12.028367065s Apr 8 21:22:41.497: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 14.032999441s Apr 8 21:22:43.501: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 16.037107079s Apr 8 21:22:45.505: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 18.0407701s Apr 8 21:22:47.509: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 20.044643785s Apr 8 21:22:49.513: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Running", Reason="", readiness=true. Elapsed: 22.049078415s Apr 8 21:22:51.518: INFO: Pod "pod-subpath-test-configmap-gt7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053524157s STEP: Saw pod success Apr 8 21:22:51.518: INFO: Pod "pod-subpath-test-configmap-gt7b" satisfied condition "success or failure" Apr 8 21:22:51.521: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-gt7b container test-container-subpath-configmap-gt7b: STEP: delete the pod Apr 8 21:22:51.550: INFO: Waiting for pod pod-subpath-test-configmap-gt7b to disappear Apr 8 21:22:51.577: INFO: Pod pod-subpath-test-configmap-gt7b no longer exists STEP: Deleting pod pod-subpath-test-configmap-gt7b Apr 8 21:22:51.578: INFO: Deleting pod "pod-subpath-test-configmap-gt7b" in namespace "subpath-1521" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:51.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1521" for this suite. • [SLOW TEST:24.418 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":53,"skipped":875,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:51.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 21:22:51.664: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dc4b372-07ea-4d4c-a886-15314cdbb5f6" in namespace "projected-484" to be "success or failure" Apr 8 21:22:51.675: INFO: Pod "downwardapi-volume-1dc4b372-07ea-4d4c-a886-15314cdbb5f6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.261768ms Apr 8 21:22:53.679: INFO: Pod "downwardapi-volume-1dc4b372-07ea-4d4c-a886-15314cdbb5f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015540668s Apr 8 21:22:55.683: INFO: Pod "downwardapi-volume-1dc4b372-07ea-4d4c-a886-15314cdbb5f6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019578616s STEP: Saw pod success Apr 8 21:22:55.683: INFO: Pod "downwardapi-volume-1dc4b372-07ea-4d4c-a886-15314cdbb5f6" satisfied condition "success or failure" Apr 8 21:22:55.686: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1dc4b372-07ea-4d4c-a886-15314cdbb5f6 container client-container: STEP: delete the pod Apr 8 21:22:55.756: INFO: Waiting for pod downwardapi-volume-1dc4b372-07ea-4d4c-a886-15314cdbb5f6 to disappear Apr 8 21:22:55.761: INFO: Pod downwardapi-volume-1dc4b372-07ea-4d4c-a886-15314cdbb5f6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:22:55.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-484" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":916,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:22:55.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up 
server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 8 21:22:56.558: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Apr 8 21:22:58.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977776, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977776, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977776, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977776, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:23:01.602: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:23:01.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:23:03.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-2941" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.310 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":55,"skipped":924,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:23:03.078: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-539cee85-15c4-45fd-8824-b6b87e078b10 [AfterEach] [sig-node] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:23:03.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8332" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":56,"skipped":939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:23:03.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8709 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-8709 Apr 8 21:23:03.395: INFO: Found 0 stateful pods, waiting for 1 Apr 8 21:23:13.399: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 8 21:23:13.420: INFO: Deleting all statefulset in ns statefulset-8709 Apr 8 21:23:13.455: INFO: Scaling statefulset ss to 0 Apr 8 21:23:33.492: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:23:33.495: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:23:33.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8709" for this suite. • [SLOW TEST:30.327 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":57,"skipped":963,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:23:33.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 8 21:23:33.594: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:23:49.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7408" for this suite. • [SLOW TEST:15.983 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":990,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:23:49.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace 
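The pod submit-and-remove flow logged above (create the pod, set up a watch, delete gracefully, verify the deletion event) can be reproduced by hand with a minimal manifest. This is an illustrative sketch, not the pod the e2e framework actually generates; the name, labels, and image are assumptions:

```yaml
# Hypothetical stand-in for the pod the "should be submitted and removed" test creates.
apiVersion: v1
kind: Pod
metadata:
  name: submit-remove-demo        # illustrative name
  labels:
    run: submit-remove-demo
spec:
  terminationGracePeriodSeconds: 30   # graceful deletion window the kubelet observes
  containers:
  - name: main
    image: docker.io/library/httpd:2.4.38-alpine   # any long-running image works
```

Applying this with `kubectl apply -f pod.yaml` while `kubectl get pods --watch` runs in another terminal, then deleting it with `kubectl delete pod submit-remove-demo`, shows the same ADDED/MODIFIED/DELETED event sequence the test asserts on.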
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 21:23:50.211: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:23:52.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977830, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977830, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977830, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977830, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:23:55.253: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout 
is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:24:07.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1857" for this suite. STEP: Destroying namespace "webhook-1857-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.072 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":59,"skipped":1009,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:24:07.572: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 21:24:08.329: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:24:10.340: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977848, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977848, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977848, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721977848, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:24:13.419: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot 
talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:24:13.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3280" for this suite. STEP: Destroying namespace "webhook-3280-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.049 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":60,"skipped":1010,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:24:13.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 8 21:24:13.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8161' Apr 8 21:24:13.771: INFO: stderr: "" Apr 8 21:24:13.771: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 8 21:24:18.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8161 -o json' Apr 8 21:24:21.428: INFO: stderr: "" Apr 8 21:24:21.428: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-08T21:24:13Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8161\",\n \"resourceVersion\": \"6506256\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8161/pods/e2e-test-httpd-pod\",\n \"uid\": \"52ca0a65-5ed8-498e-ba12-3b3c174300ae\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8wzrp\",\n \"readOnly\": true\n }\n ]\n }\n ],\n 
\"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8wzrp\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8wzrp\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-08T21:24:13Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-08T21:24:17Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-08T21:24:17Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-08T21:24:13Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f859931cdf9c91b054173ec22f0d745400977c5136bccb818f82445f023ddf8d\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-08T21:24:16Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": 
\"10.244.1.104\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.104\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-08T21:24:13Z\"\n }\n}\n" STEP: replace the image in the pod Apr 8 21:24:21.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8161' Apr 8 21:24:21.714: INFO: stderr: "" Apr 8 21:24:21.714: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Apr 8 21:24:21.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8161' Apr 8 21:24:29.279: INFO: stderr: "" Apr 8 21:24:29.279: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:24:29.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8161" for this suite. • [SLOW TEST:15.664 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":61,"skipped":1015,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:24:29.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:24:46.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9695" for this suite. • [SLOW TEST:17.110 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":62,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:24:46.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4800 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 8 21:24:46.478: INFO: Found 0 stateful pods, waiting for 3 Apr 8 21:24:56.486: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:24:56.486: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:24:56.486: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 8 21:25:06.483: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:25:06.483: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:25:06.483: INFO: Waiting for 
pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 8 21:25:06.511: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 8 21:25:16.552: INFO: Updating stateful set ss2 Apr 8 21:25:16.591: INFO: Waiting for Pod statefulset-4800/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 8 21:25:26.787: INFO: Found 2 stateful pods, waiting for 3 Apr 8 21:25:36.792: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:25:36.792: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:25:36.792: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 8 21:25:36.816: INFO: Updating stateful set ss2 Apr 8 21:25:36.876: INFO: Waiting for Pod statefulset-4800/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 8 21:25:46.904: INFO: Updating stateful set ss2 Apr 8 21:25:46.915: INFO: Waiting for StatefulSet statefulset-4800/ss2 to complete update Apr 8 21:25:46.915: INFO: Waiting for Pod statefulset-4800/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 8 21:25:56.923: INFO: Deleting all statefulset in ns statefulset-4800 Apr 8 21:25:56.926: INFO: Scaling statefulset ss2 to 0 Apr 8 21:26:26.944: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:26:26.948: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:26:26.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4800" for this suite. • [SLOW TEST:100.570 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":63,"skipped":1061,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:26:26.967: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ed92e07b-e599-45cc-aab5-62c846644968 STEP: Creating a pod to test consume secrets Apr 8 21:26:27.067: INFO: Waiting up to 5m0s for pod 
"pod-secrets-d2515428-3545-4c37-8a60-585e65e4a1ff" in namespace "secrets-1644" to be "success or failure" Apr 8 21:26:27.073: INFO: Pod "pod-secrets-d2515428-3545-4c37-8a60-585e65e4a1ff": Phase="Pending", Reason="", readiness=false. Elapsed: 5.437739ms Apr 8 21:26:29.093: INFO: Pod "pod-secrets-d2515428-3545-4c37-8a60-585e65e4a1ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025304867s Apr 8 21:26:31.097: INFO: Pod "pod-secrets-d2515428-3545-4c37-8a60-585e65e4a1ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029627496s STEP: Saw pod success Apr 8 21:26:31.097: INFO: Pod "pod-secrets-d2515428-3545-4c37-8a60-585e65e4a1ff" satisfied condition "success or failure" Apr 8 21:26:31.100: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d2515428-3545-4c37-8a60-585e65e4a1ff container secret-volume-test: STEP: delete the pod Apr 8 21:26:31.150: INFO: Waiting for pod pod-secrets-d2515428-3545-4c37-8a60-585e65e4a1ff to disappear Apr 8 21:26:31.161: INFO: Pod pod-secrets-d2515428-3545-4c37-8a60-585e65e4a1ff no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:26:31.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1644" for this suite. 
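The secret-volume consumption verified above can be sketched as a Secret plus a pod that mounts it read-only and exits once it has read the data, which is why the test waits for "success or failure" (a Succeeded phase). Names and the key/value pair are illustrative assumptions:

```yaml
# Hypothetical equivalent of the secret + consumer pod the test creates.
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret               # illustrative name
stringData:
  data-1: value-1                 # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo          # illustrative name
spec:
  restartPolicy: Never            # pod should terminate, not restart
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
```

Once the container prints the secret contents and exits 0, the pod reaches Succeeded, matching the Pending → Succeeded phase transitions in the log.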
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1081,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:26:31.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:26:57.266: INFO: Container started at 2020-04-08 21:26:33 +0000 UTC, pod became ready at 2020-04-08 21:26:55 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:26:57.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5189" for this suite. 
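The readiness-probe behavior checked above (container started at 21:26:33, pod not Ready until 21:26:55) comes from `initialDelaySeconds`: the kubelet will not run the probe, and therefore cannot mark the pod Ready, before the delay elapses. A minimal sketch, assuming an exec probe and an illustrative delay value (the exact probe the e2e test configures may differ):

```yaml
# Hypothetical pod demonstrating that readiness lags container start by the initial delay.
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo            # illustrative name
spec:
  containers:
  - name: probe-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /tmp/healthy && sleep 600"]
    readinessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 20     # pod cannot become Ready before this many seconds
      periodSeconds: 5
```

With `restartCount` staying at 0 and Ready flipping to true only after the delay, `kubectl get pod readiness-demo -w` shows the same "started but not yet ready" window the test measures.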
• [SLOW TEST:26.098 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1083,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:26:57.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 8 21:26:57.331: INFO: Waiting up to 5m0s for pod "downward-api-da718ecc-699e-4387-9e8a-48127578faff" in namespace "downward-api-703" to be "success or failure" Apr 8 21:26:57.380: INFO: Pod "downward-api-da718ecc-699e-4387-9e8a-48127578faff": Phase="Pending", Reason="", readiness=false. Elapsed: 49.299271ms Apr 8 21:26:59.385: INFO: Pod "downward-api-da718ecc-699e-4387-9e8a-48127578faff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.053892993s Apr 8 21:27:01.389: INFO: Pod "downward-api-da718ecc-699e-4387-9e8a-48127578faff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05825952s STEP: Saw pod success Apr 8 21:27:01.389: INFO: Pod "downward-api-da718ecc-699e-4387-9e8a-48127578faff" satisfied condition "success or failure" Apr 8 21:27:01.392: INFO: Trying to get logs from node jerma-worker2 pod downward-api-da718ecc-699e-4387-9e8a-48127578faff container dapi-container: STEP: delete the pod Apr 8 21:27:01.455: INFO: Waiting for pod downward-api-da718ecc-699e-4387-9e8a-48127578faff to disappear Apr 8 21:27:01.460: INFO: Pod downward-api-da718ecc-699e-4387-9e8a-48127578faff no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:01.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-703" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1086,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:01.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
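(The Downward API test above injects the container's own resource requests and limits as environment variables. A sketch of the kind of pod spec involved — the resource values and env var names are assumptions for illustration:

```yaml
# Illustrative only; values and names are assumed, not from the test source.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]  # print the injected variables, then exit 0
    resources:
      requests: {cpu: 250m, memory: 32Mi}
      limits:   {cpu: 500m, memory: 64Mi}
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```

Because the container prints the environment and exits successfully, the pod reaches `Succeeded`, matching the "success or failure" condition the framework waits on.)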
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:08.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2721" for this suite. • [SLOW TEST:7.079 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
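(The ResourceQuota test above checks that, shortly after a quota is created, the quota controller mirrors `spec.hard` into `status.hard` and computes `status.used` for the namespace — that is what "status is promptly calculated" means. A minimal quota of the general shape involved; the name and limits are assumptions:

```yaml
# Illustrative only; the test's actual quota name and limits are not shown in the log.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                # hypothetical name
spec:
  hard:
    pods: "5"
    requests.cpu: "1"
    requests.memory: 1Gi
```
)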
[Conformance]","total":278,"completed":67,"skipped":1147,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:08.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 8 21:27:08.637: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7407 /api/v1/namespaces/watch-7407/configmaps/e2e-watch-test-resource-version 4e957164-4e88-40f7-9006-5bd3e403639c 6507170 0 2020-04-08 21:27:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 8 21:27:08.638: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7407 /api/v1/namespaces/watch-7407/configmaps/e2e-watch-test-resource-version 4e957164-4e88-40f7-9006-5bd3e403639c 6507171 0 2020-04-08 21:27:08 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] 
[sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:08.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7407" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":68,"skipped":1153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:08.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:27:08.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-409' Apr 8 21:27:08.987: INFO: stderr: "" Apr 8 21:27:08.987: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 8 21:27:08.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-409' Apr 8 21:27:09.251: INFO: stderr: "" Apr 8 21:27:09.251: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 8 21:27:10.256: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:27:10.256: INFO: Found 0 / 1 Apr 8 21:27:11.254: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:27:11.255: INFO: Found 0 / 1 Apr 8 21:27:12.256: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:27:12.256: INFO: Found 1 / 1 Apr 8 21:27:12.256: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 8 21:27:12.260: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:27:12.260: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 8 21:27:12.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-2jnlb --namespace=kubectl-409' Apr 8 21:27:12.377: INFO: stderr: "" Apr 8 21:27:12.377: INFO: stdout: "Name: agnhost-master-2jnlb\nNamespace: kubectl-409\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Wed, 08 Apr 2020 21:27:09 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.110\nIPs:\n IP: 10.244.1.110\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://d4818a5dcf6c50058090c0cf9e3d3e46f5727e1dd0513dfcaf32c15abf177e2b\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 08 Apr 2020 21:27:11 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-t48rn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-t48rn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-t48rn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n 
node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-409/agnhost-master-2jnlb to jerma-worker\n Normal Pulled 2s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" Apr 8 21:27:12.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-409' Apr 8 21:27:12.508: INFO: stderr: "" Apr 8 21:27:12.508: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-409\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-2jnlb\n" Apr 8 21:27:12.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-409' Apr 8 21:27:12.611: INFO: stderr: "" Apr 8 21:27:12.611: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-409\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.107.52.51\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.110:6379\nSession Affinity: None\nEvents: \n" Apr 8 21:27:12.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Apr 8 21:27:12.754: INFO: stderr: "" Apr 8 
21:27:12.754: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Wed, 08 Apr 2020 21:27:05 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 08 Apr 2020 21:26:50 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 08 Apr 2020 21:26:50 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 08 Apr 2020 21:26:50 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 08 Apr 2020 21:26:50 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container 
Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 24d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 24d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 24d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 24d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 8 21:27:12.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-409' Apr 8 21:27:12.870: INFO: stderr: "" Apr 8 21:27:12.870: INFO: stdout: "Name: kubectl-409\nLabels: e2e-framework=kubectl\n e2e-run=feca5b2b-07c1-4797-b0cb-b51a89d18742\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:12.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-409" for this suite. 
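(The `kubectl describe rc agnhost-master` output above implies a ReplicationController roughly like the following. This is reconstructed from the describe output, so the field values match the log, but it may differ in detail from the exact manifest the test piped to `kubectl create -f -`:

```yaml
# Reconstructed from the describe output above; illustrative, not the test's literal manifest.
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-master
  labels: {app: agnhost, role: master}
spec:
  replicas: 1
  selector: {app: agnhost, role: master}
  template:
    metadata:
      labels: {app: agnhost, role: master}
    spec:
      containers:
      - name: agnhost-master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        ports:
        - containerPort: 6379
```

The companion `agnhost-master` Service selects the same `app=agnhost,role=master` labels, which is why its Endpoints resolve to the pod IP 10.244.1.110:6379 shown in the describe output.)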
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":69,"skipped":1230,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:12.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Apr 8 21:27:12.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4772 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 8 21:27:15.722: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0408 21:27:15.657405 1614 log.go:172] (0xc0000284d0) (0xc0006b2140) Create stream\nI0408 21:27:15.657457 1614 log.go:172] (0xc0000284d0) (0xc0006b2140) Stream added, broadcasting: 1\nI0408 21:27:15.659529 1614 log.go:172] (0xc0000284d0) Reply frame received for 1\nI0408 21:27:15.659563 1614 log.go:172] (0xc0000284d0) (0xc000742000) Create stream\nI0408 21:27:15.659572 1614 log.go:172] (0xc0000284d0) (0xc000742000) Stream added, broadcasting: 3\nI0408 21:27:15.660385 1614 log.go:172] (0xc0000284d0) Reply frame received for 3\nI0408 21:27:15.660412 1614 log.go:172] (0xc0000284d0) (0xc0006b21e0) Create stream\nI0408 21:27:15.660421 1614 log.go:172] (0xc0000284d0) (0xc0006b21e0) Stream added, broadcasting: 5\nI0408 21:27:15.661246 1614 log.go:172] (0xc0000284d0) Reply frame received for 5\nI0408 21:27:15.661276 1614 log.go:172] (0xc0000284d0) (0xc0006b2280) Create stream\nI0408 21:27:15.661285 1614 log.go:172] (0xc0000284d0) (0xc0006b2280) Stream added, broadcasting: 7\nI0408 21:27:15.662347 1614 log.go:172] (0xc0000284d0) Reply frame received for 7\nI0408 21:27:15.662463 1614 log.go:172] (0xc000742000) (3) Writing data frame\nI0408 21:27:15.662553 1614 log.go:172] (0xc000742000) (3) Writing data frame\nI0408 21:27:15.663479 1614 log.go:172] (0xc0000284d0) Data frame received for 5\nI0408 21:27:15.663500 1614 log.go:172] (0xc0006b21e0) (5) Data frame handling\nI0408 21:27:15.663516 1614 log.go:172] (0xc0006b21e0) (5) Data frame sent\nI0408 21:27:15.664046 1614 log.go:172] (0xc0000284d0) Data frame received for 5\nI0408 21:27:15.664060 1614 log.go:172] (0xc0006b21e0) (5) Data frame handling\nI0408 21:27:15.664073 1614 log.go:172] (0xc0006b21e0) (5) Data frame sent\nI0408 21:27:15.699782 1614 log.go:172] (0xc0000284d0) Data frame received for 7\nI0408 21:27:15.699831 1614 log.go:172] (0xc0006b2280) (7) Data frame handling\nI0408 21:27:15.700178 1614 
log.go:172] (0xc0000284d0) Data frame received for 5\nI0408 21:27:15.700211 1614 log.go:172] (0xc0006b21e0) (5) Data frame handling\nI0408 21:27:15.700530 1614 log.go:172] (0xc0000284d0) Data frame received for 1\nI0408 21:27:15.700598 1614 log.go:172] (0xc0000284d0) (0xc000742000) Stream removed, broadcasting: 3\nI0408 21:27:15.700652 1614 log.go:172] (0xc0006b2140) (1) Data frame handling\nI0408 21:27:15.700682 1614 log.go:172] (0xc0006b2140) (1) Data frame sent\nI0408 21:27:15.700714 1614 log.go:172] (0xc0000284d0) (0xc0006b2140) Stream removed, broadcasting: 1\nI0408 21:27:15.700815 1614 log.go:172] (0xc0000284d0) Go away received\nI0408 21:27:15.701456 1614 log.go:172] (0xc0000284d0) (0xc0006b2140) Stream removed, broadcasting: 1\nI0408 21:27:15.701479 1614 log.go:172] (0xc0000284d0) (0xc000742000) Stream removed, broadcasting: 3\nI0408 21:27:15.701489 1614 log.go:172] (0xc0000284d0) (0xc0006b21e0) Stream removed, broadcasting: 5\nI0408 21:27:15.701499 1614 log.go:172] (0xc0000284d0) (0xc0006b2280) Stream removed, broadcasting: 7\n" Apr 8 21:27:15.722: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:17.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4772" for this suite. 
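(The deprecation warning in the stderr above notes that `kubectl run --generator=job/v1` is going away. A roughly equivalent Job manifest — an approximation of what that generator produced from the command line shown, not taken verbatim from kubectl:

```yaml
# Approximation of the Job the deprecated job/v1 generator would create; illustrative only.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                          # matches --stdin on the command line
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

With `--rm=true` and `--attach=true`, kubectl attaches to the pod's stdin, streams `abcd1234` through `cat`, and deletes the Job once the attach session ends — which is exactly the `job.batch "e2e-test-rm-busybox-job" deleted` line in the stdout above.)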
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":70,"skipped":1238,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:17.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 8 21:27:17.869: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:27:17.885: INFO: Number of nodes with available pods: 0 Apr 8 21:27:17.885: INFO: Node jerma-worker is running more than one daemon pod Apr 8 21:27:18.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:27:18.893: INFO: Number of nodes with available pods: 0 Apr 8 21:27:18.893: INFO: Node jerma-worker is running more than one daemon pod Apr 8 21:27:19.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:27:19.894: INFO: Number of nodes with available pods: 0 Apr 8 21:27:19.894: INFO: Node jerma-worker is running more than one daemon pod Apr 8 21:27:20.889: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:27:20.892: INFO: Number of nodes with available pods: 1 Apr 8 21:27:20.892: INFO: Node jerma-worker is running more than one daemon pod Apr 8 21:27:21.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:27:21.892: INFO: Number of nodes with available pods: 2 Apr 8 21:27:21.892: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 8 21:27:21.924: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:27:21.929: INFO: Number of nodes with available pods: 2 Apr 8 21:27:21.929: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1985, will wait for the garbage collector to delete the pods Apr 8 21:27:23.354: INFO: Deleting DaemonSet.extensions daemon-set took: 215.847285ms Apr 8 21:27:23.455: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.211875ms Apr 8 21:27:29.566: INFO: Number of nodes with available pods: 0 Apr 8 21:27:29.566: INFO: Number of running nodes: 0, number of available pods: 0 Apr 8 21:27:29.569: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1985/daemonsets","resourceVersion":"6507379"},"items":null} Apr 8 21:27:29.571: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1985/pods","resourceVersion":"6507379"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:29.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1985" for this suite. 
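(The repeated "DaemonSet pods can't tolerate node jerma-control-plane" lines above come from the DaemonSet having no toleration for the control plane's `node-role.kubernetes.io/master:NoSchedule` taint, so only the two worker nodes are counted. A sketch of a DaemonSet of that shape — the name matches the log, but the labels, image, and args are assumptions:

```yaml
# Illustrative only; apart from the name, fields are assumed rather than taken from the test.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {daemonset-name: daemon-set}   # hypothetical label
  template:
    metadata:
      labels: {daemonset-name: daemon-set}
    spec:
      # No toleration for node-role.kubernetes.io/master:NoSchedule,
      # so the control-plane node is skipped, as the log notes.
      containers:
      - name: app
        image: docker.io/library/busybox:1.29   # image assumed
        args: ["sleep", "3600"]
```

The test then force-sets one daemon pod's phase to Failed and verifies the DaemonSet controller deletes it and creates a replacement, restoring 2 available pods on 2 nodes.)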
• [SLOW TEST:11.844 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":71,"skipped":1251,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:29.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9147b83f-40d0-4132-8bf2-8760bd04b14c STEP: Creating a pod to test consume secrets Apr 8 21:27:29.643: INFO: Waiting up to 5m0s for pod "pod-secrets-1da4661f-63d8-4e5d-988c-b2473cd53ce2" in namespace "secrets-7930" to be "success or failure" Apr 8 21:27:29.660: INFO: Pod "pod-secrets-1da4661f-63d8-4e5d-988c-b2473cd53ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.377231ms Apr 8 21:27:31.665: INFO: Pod "pod-secrets-1da4661f-63d8-4e5d-988c-b2473cd53ce2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021937585s Apr 8 21:27:33.669: INFO: Pod "pod-secrets-1da4661f-63d8-4e5d-988c-b2473cd53ce2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025901035s STEP: Saw pod success Apr 8 21:27:33.669: INFO: Pod "pod-secrets-1da4661f-63d8-4e5d-988c-b2473cd53ce2" satisfied condition "success or failure" Apr 8 21:27:33.672: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-1da4661f-63d8-4e5d-988c-b2473cd53ce2 container secret-volume-test: STEP: delete the pod Apr 8 21:27:33.704: INFO: Waiting for pod pod-secrets-1da4661f-63d8-4e5d-988c-b2473cd53ce2 to disappear Apr 8 21:27:33.707: INFO: Pod pod-secrets-1da4661f-63d8-4e5d-988c-b2473cd53ce2 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:33.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7930" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1273,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:33.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3263 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3263 I0408 21:27:33.827746 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3263, replica count: 2 I0408 21:27:36.878142 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 21:27:39.878351 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 21:27:39.878: INFO: Creating new exec pod Apr 8 21:27:44.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3263 execpod5dgf8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 8 21:27:45.149: INFO: stderr: "I0408 21:27:45.073577 1636 log.go:172] (0xc00020ed10) (0xc0006f5a40) Create stream\nI0408 21:27:45.073660 1636 log.go:172] (0xc00020ed10) (0xc0006f5a40) Stream added, broadcasting: 1\nI0408 21:27:45.076117 1636 log.go:172] (0xc00020ed10) Reply frame received for 1\nI0408 21:27:45.076147 1636 log.go:172] (0xc00020ed10) (0xc000ace000) Create stream\nI0408 21:27:45.076160 1636 log.go:172] (0xc00020ed10) (0xc000ace000) Stream added, broadcasting: 3\nI0408 21:27:45.077271 1636 log.go:172] (0xc00020ed10) Reply frame received for 3\nI0408 21:27:45.077341 1636 log.go:172] (0xc00020ed10) (0xc000384000) Create stream\nI0408 21:27:45.077366 1636 log.go:172] (0xc00020ed10) (0xc000384000) Stream added, broadcasting: 5\nI0408 21:27:45.078436 1636 log.go:172] (0xc00020ed10) Reply frame received for 5\nI0408 21:27:45.140666 1636 log.go:172] (0xc00020ed10) Data frame received for 
5\nI0408 21:27:45.140705 1636 log.go:172] (0xc000384000) (5) Data frame handling\nI0408 21:27:45.140736 1636 log.go:172] (0xc000384000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0408 21:27:45.141395 1636 log.go:172] (0xc00020ed10) Data frame received for 5\nI0408 21:27:45.141443 1636 log.go:172] (0xc000384000) (5) Data frame handling\nI0408 21:27:45.141459 1636 log.go:172] (0xc000384000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0408 21:27:45.141688 1636 log.go:172] (0xc00020ed10) Data frame received for 5\nI0408 21:27:45.141720 1636 log.go:172] (0xc000384000) (5) Data frame handling\nI0408 21:27:45.141928 1636 log.go:172] (0xc00020ed10) Data frame received for 3\nI0408 21:27:45.141956 1636 log.go:172] (0xc000ace000) (3) Data frame handling\nI0408 21:27:45.143730 1636 log.go:172] (0xc00020ed10) Data frame received for 1\nI0408 21:27:45.143757 1636 log.go:172] (0xc0006f5a40) (1) Data frame handling\nI0408 21:27:45.143778 1636 log.go:172] (0xc0006f5a40) (1) Data frame sent\nI0408 21:27:45.143793 1636 log.go:172] (0xc00020ed10) (0xc0006f5a40) Stream removed, broadcasting: 1\nI0408 21:27:45.143914 1636 log.go:172] (0xc00020ed10) Go away received\nI0408 21:27:45.144186 1636 log.go:172] (0xc00020ed10) (0xc0006f5a40) Stream removed, broadcasting: 1\nI0408 21:27:45.144207 1636 log.go:172] (0xc00020ed10) (0xc000ace000) Stream removed, broadcasting: 3\nI0408 21:27:45.144219 1636 log.go:172] (0xc00020ed10) (0xc000384000) Stream removed, broadcasting: 5\n" Apr 8 21:27:45.149: INFO: stdout: "" Apr 8 21:27:45.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3263 execpod5dgf8 -- /bin/sh -x -c nc -zv -t -w 2 10.100.35.218 80' Apr 8 21:27:45.365: INFO: stderr: "I0408 21:27:45.287114 1658 log.go:172] (0xc00092c630) (0xc0008f2000) Create stream\nI0408 21:27:45.287160 1658 log.go:172] (0xc00092c630) (0xc0008f2000) Stream added, broadcasting: 1\nI0408 21:27:45.289703 
1658 log.go:172] (0xc00092c630) Reply frame received for 1\nI0408 21:27:45.289747 1658 log.go:172] (0xc00092c630) (0xc000a12000) Create stream\nI0408 21:27:45.289761 1658 log.go:172] (0xc00092c630) (0xc000a12000) Stream added, broadcasting: 3\nI0408 21:27:45.290811 1658 log.go:172] (0xc00092c630) Reply frame received for 3\nI0408 21:27:45.290848 1658 log.go:172] (0xc00092c630) (0xc0008f20a0) Create stream\nI0408 21:27:45.290859 1658 log.go:172] (0xc00092c630) (0xc0008f20a0) Stream added, broadcasting: 5\nI0408 21:27:45.291788 1658 log.go:172] (0xc00092c630) Reply frame received for 5\nI0408 21:27:45.360168 1658 log.go:172] (0xc00092c630) Data frame received for 5\nI0408 21:27:45.360196 1658 log.go:172] (0xc0008f20a0) (5) Data frame handling\nI0408 21:27:45.360204 1658 log.go:172] (0xc0008f20a0) (5) Data frame sent\nI0408 21:27:45.360209 1658 log.go:172] (0xc00092c630) Data frame received for 5\nI0408 21:27:45.360213 1658 log.go:172] (0xc0008f20a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.100.35.218 80\nConnection to 10.100.35.218 80 port [tcp/http] succeeded!\nI0408 21:27:45.360237 1658 log.go:172] (0xc00092c630) Data frame received for 3\nI0408 21:27:45.360243 1658 log.go:172] (0xc000a12000) (3) Data frame handling\nI0408 21:27:45.361405 1658 log.go:172] (0xc00092c630) Data frame received for 1\nI0408 21:27:45.361417 1658 log.go:172] (0xc0008f2000) (1) Data frame handling\nI0408 21:27:45.361425 1658 log.go:172] (0xc0008f2000) (1) Data frame sent\nI0408 21:27:45.361435 1658 log.go:172] (0xc00092c630) (0xc0008f2000) Stream removed, broadcasting: 1\nI0408 21:27:45.361446 1658 log.go:172] (0xc00092c630) Go away received\nI0408 21:27:45.361749 1658 log.go:172] (0xc00092c630) (0xc0008f2000) Stream removed, broadcasting: 1\nI0408 21:27:45.361777 1658 log.go:172] (0xc00092c630) (0xc000a12000) Stream removed, broadcasting: 3\nI0408 21:27:45.361784 1658 log.go:172] (0xc00092c630) (0xc0008f20a0) Stream removed, broadcasting: 5\n" Apr 8 21:27:45.365: INFO: stdout: "" Apr 
8 21:27:45.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3263 execpod5dgf8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30605' Apr 8 21:27:45.547: INFO: stderr: "I0408 21:27:45.486791 1681 log.go:172] (0xc000978000) (0xc000954000) Create stream\nI0408 21:27:45.486877 1681 log.go:172] (0xc000978000) (0xc000954000) Stream added, broadcasting: 1\nI0408 21:27:45.489739 1681 log.go:172] (0xc000978000) Reply frame received for 1\nI0408 21:27:45.489797 1681 log.go:172] (0xc000978000) (0xc00092c000) Create stream\nI0408 21:27:45.489810 1681 log.go:172] (0xc000978000) (0xc00092c000) Stream added, broadcasting: 3\nI0408 21:27:45.490867 1681 log.go:172] (0xc000978000) Reply frame received for 3\nI0408 21:27:45.490910 1681 log.go:172] (0xc000978000) (0xc0008a6460) Create stream\nI0408 21:27:45.490937 1681 log.go:172] (0xc000978000) (0xc0008a6460) Stream added, broadcasting: 5\nI0408 21:27:45.492278 1681 log.go:172] (0xc000978000) Reply frame received for 5\nI0408 21:27:45.541099 1681 log.go:172] (0xc000978000) Data frame received for 3\nI0408 21:27:45.541261 1681 log.go:172] (0xc00092c000) (3) Data frame handling\nI0408 21:27:45.541296 1681 log.go:172] (0xc000978000) Data frame received for 5\nI0408 21:27:45.541321 1681 log.go:172] (0xc0008a6460) (5) Data frame handling\nI0408 21:27:45.541334 1681 log.go:172] (0xc0008a6460) (5) Data frame sent\nI0408 21:27:45.541343 1681 log.go:172] (0xc000978000) Data frame received for 5\nI0408 21:27:45.541350 1681 log.go:172] (0xc0008a6460) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30605\nConnection to 172.17.0.10 30605 port [tcp/30605] succeeded!\nI0408 21:27:45.542632 1681 log.go:172] (0xc000978000) Data frame received for 1\nI0408 21:27:45.542678 1681 log.go:172] (0xc000954000) (1) Data frame handling\nI0408 21:27:45.542723 1681 log.go:172] (0xc000954000) (1) Data frame sent\nI0408 21:27:45.542753 1681 log.go:172] (0xc000978000) (0xc000954000) Stream removed, broadcasting: 
1\nI0408 21:27:45.542796 1681 log.go:172] (0xc000978000) Go away received\nI0408 21:27:45.543145 1681 log.go:172] (0xc000978000) (0xc000954000) Stream removed, broadcasting: 1\nI0408 21:27:45.543174 1681 log.go:172] (0xc000978000) (0xc00092c000) Stream removed, broadcasting: 3\nI0408 21:27:45.543185 1681 log.go:172] (0xc000978000) (0xc0008a6460) Stream removed, broadcasting: 5\n" Apr 8 21:27:45.547: INFO: stdout: "" Apr 8 21:27:45.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3263 execpod5dgf8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30605' Apr 8 21:27:45.750: INFO: stderr: "I0408 21:27:45.672856 1701 log.go:172] (0xc000105290) (0xc000717720) Create stream\nI0408 21:27:45.672926 1701 log.go:172] (0xc000105290) (0xc000717720) Stream added, broadcasting: 1\nI0408 21:27:45.675270 1701 log.go:172] (0xc000105290) Reply frame received for 1\nI0408 21:27:45.675307 1701 log.go:172] (0xc000105290) (0xc0009be000) Create stream\nI0408 21:27:45.675317 1701 log.go:172] (0xc000105290) (0xc0009be000) Stream added, broadcasting: 3\nI0408 21:27:45.676221 1701 log.go:172] (0xc000105290) Reply frame received for 3\nI0408 21:27:45.676265 1701 log.go:172] (0xc000105290) (0xc0007177c0) Create stream\nI0408 21:27:45.676277 1701 log.go:172] (0xc000105290) (0xc0007177c0) Stream added, broadcasting: 5\nI0408 21:27:45.677245 1701 log.go:172] (0xc000105290) Reply frame received for 5\nI0408 21:27:45.744238 1701 log.go:172] (0xc000105290) Data frame received for 3\nI0408 21:27:45.744301 1701 log.go:172] (0xc000105290) Data frame received for 5\nI0408 21:27:45.744340 1701 log.go:172] (0xc0007177c0) (5) Data frame handling\nI0408 21:27:45.744358 1701 log.go:172] (0xc0007177c0) (5) Data frame sent\nI0408 21:27:45.744373 1701 log.go:172] (0xc000105290) Data frame received for 5\nI0408 21:27:45.744386 1701 log.go:172] (0xc0007177c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30605\nConnection to 172.17.0.8 30605 port [tcp/30605] 
succeeded!\nI0408 21:27:45.744408 1701 log.go:172] (0xc0009be000) (3) Data frame handling\nI0408 21:27:45.745968 1701 log.go:172] (0xc000105290) Data frame received for 1\nI0408 21:27:45.745998 1701 log.go:172] (0xc000717720) (1) Data frame handling\nI0408 21:27:45.746010 1701 log.go:172] (0xc000717720) (1) Data frame sent\nI0408 21:27:45.746027 1701 log.go:172] (0xc000105290) (0xc000717720) Stream removed, broadcasting: 1\nI0408 21:27:45.746077 1701 log.go:172] (0xc000105290) Go away received\nI0408 21:27:45.746402 1701 log.go:172] (0xc000105290) (0xc000717720) Stream removed, broadcasting: 1\nI0408 21:27:45.746421 1701 log.go:172] (0xc000105290) (0xc0009be000) Stream removed, broadcasting: 3\nI0408 21:27:45.746432 1701 log.go:172] (0xc000105290) (0xc0007177c0) Stream removed, broadcasting: 5\n" Apr 8 21:27:45.750: INFO: stdout: "" Apr 8 21:27:45.750: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:45.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3263" for this suite. 
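The `nc -zv -t -w 2 <addr> <port>` probes above (against the service name, the ClusterIP `10.100.35.218:80`, and the NodePorts `172.17.0.10:30605` / `172.17.0.8:30605`) simply test whether a TCP connection can be established within a timeout. A minimal local sketch of that check — an illustrative equivalent, not the e2e framework's actual implementation:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Rough equivalent of `nc -zv -t -w 2 host port`: attempt a TCP
    connect within `timeout` seconds and report success without
    sending any payload."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Local illustration: a listening socket is reachable, a closed one is not.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(tcp_reachable("127.0.0.1", port))   # reachable while listening
srv.close()
print(tcp_reachable("127.0.0.1", port))   # refused after close
```

The test treats a successful connect as proof the Service's ClusterIP and NodePort endpoints are wired up; no application-level traffic is exchanged.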
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.129 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":73,"skipped":1292,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:45.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:27:45.907: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:47.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8485" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":74,"skipped":1300,"failed":0} ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:47.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-276.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-276.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-276.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-276.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-276.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-276.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 21:27:53.278: INFO: DNS probes using dns-276/dns-test-43b5b35b-35ae-4f5a-b0f6-6e33d45d32e7 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:53.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-276" for this suite. 
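The probe scripts above derive the pod's DNS A-record name from its IP with an awk pipeline (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-276.pod.cluster.local"}'`): dots in the IPv4 address become dashes, then the namespace and the `pod.cluster.local` suffix are appended. A sketch of that naming rule (illustrative helper, names assumed):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the cluster-DNS A record name for a pod IP, mirroring the
    awk pipeline in the probe script: dots become dashes, then the
    namespace and the pod DNS suffix are appended."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.5", "dns-276"))
# 10-244-1-5.dns-276.pod.cluster.local
```

Both the wheezy and jessie probe pods resolve this name over UDP and TCP (`dig +notcp` / `dig +tcp`) and write `OK` marker files that the test then collects.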
• [SLOW TEST:6.267 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":75,"skipped":1300,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:53.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-3dd06630-f934-44e4-a576-f67a61159ec2 STEP: Creating a pod to test consume secrets Apr 8 21:27:53.723: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8844087c-a3a1-448a-a588-7ba63b916725" in namespace "projected-1224" to be "success or failure" Apr 8 21:27:53.751: INFO: Pod "pod-projected-secrets-8844087c-a3a1-448a-a588-7ba63b916725": Phase="Pending", Reason="", readiness=false. Elapsed: 27.847227ms Apr 8 21:27:55.754: INFO: Pod "pod-projected-secrets-8844087c-a3a1-448a-a588-7ba63b916725": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031545487s Apr 8 21:27:57.759: INFO: Pod "pod-projected-secrets-8844087c-a3a1-448a-a588-7ba63b916725": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036227772s STEP: Saw pod success Apr 8 21:27:57.759: INFO: Pod "pod-projected-secrets-8844087c-a3a1-448a-a588-7ba63b916725" satisfied condition "success or failure" Apr 8 21:27:57.762: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-8844087c-a3a1-448a-a588-7ba63b916725 container projected-secret-volume-test: STEP: delete the pod Apr 8 21:27:57.796: INFO: Waiting for pod pod-projected-secrets-8844087c-a3a1-448a-a588-7ba63b916725 to disappear Apr 8 21:27:57.810: INFO: Pod pod-projected-secrets-8844087c-a3a1-448a-a588-7ba63b916725 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:27:57.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1224" for this suite. 
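The repeated `Waiting up to 5m0s for pod ... to be "success or failure"` records above show the framework's polling pattern: check the pod phase, log the elapsed time, sleep, and retry until the pod reaches a terminal phase or the deadline passes. A minimal sketch of that loop — assumed shape, not the framework's own `wait.Poll` code:

```python
import time

def wait_for(condition, timeout: float = 300.0, interval: float = 2.0) -> bool:
    """Poll `condition` until it returns True or `timeout` seconds
    elapse, mirroring the 'Waiting up to 5m0s for pod ...' loop
    (5m0s timeout, ~2s between phase checks in the log)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False
```

In the log the condition is "pod phase is Succeeded or Failed"; the same helper shape is reused later for waiting on node readiness and pod deletion.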
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1308,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:27:57.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-406c9755-f40c-47d1-b3a6-1a4c62019388 STEP: Creating a pod to test consume secrets Apr 8 21:27:57.896: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8701c5c1-3dc8-48a8-91e9-9a24c8cb2b79" in namespace "projected-4212" to be "success or failure" Apr 8 21:27:57.906: INFO: Pod "pod-projected-secrets-8701c5c1-3dc8-48a8-91e9-9a24c8cb2b79": Phase="Pending", Reason="", readiness=false. Elapsed: 10.07794ms Apr 8 21:27:59.914: INFO: Pod "pod-projected-secrets-8701c5c1-3dc8-48a8-91e9-9a24c8cb2b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018633248s Apr 8 21:28:01.918: INFO: Pod "pod-projected-secrets-8701c5c1-3dc8-48a8-91e9-9a24c8cb2b79": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02250929s STEP: Saw pod success Apr 8 21:28:01.918: INFO: Pod "pod-projected-secrets-8701c5c1-3dc8-48a8-91e9-9a24c8cb2b79" satisfied condition "success or failure" Apr 8 21:28:01.922: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8701c5c1-3dc8-48a8-91e9-9a24c8cb2b79 container secret-volume-test: STEP: delete the pod Apr 8 21:28:01.968: INFO: Waiting for pod pod-projected-secrets-8701c5c1-3dc8-48a8-91e9-9a24c8cb2b79 to disappear Apr 8 21:28:01.998: INFO: Pod pod-projected-secrets-8701c5c1-3dc8-48a8-91e9-9a24c8cb2b79 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:28:01.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4212" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1309,"failed":0} ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:28:02.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the 
type=ClusterIP in namespace services-3503 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3503 STEP: creating replication controller externalsvc in namespace services-3503 I0408 21:28:02.186871 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3503, replica count: 2 I0408 21:28:05.237329 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 21:28:08.237531 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 8 21:28:08.271: INFO: Creating new exec pod Apr 8 21:28:12.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3503 execpodlwxsj -- /bin/sh -x -c nslookup clusterip-service' Apr 8 21:28:12.568: INFO: stderr: "I0408 21:28:12.464374 1723 log.go:172] (0xc0000f53f0) (0xc000679c20) Create stream\nI0408 21:28:12.464423 1723 log.go:172] (0xc0000f53f0) (0xc000679c20) Stream added, broadcasting: 1\nI0408 21:28:12.466651 1723 log.go:172] (0xc0000f53f0) Reply frame received for 1\nI0408 21:28:12.466717 1723 log.go:172] (0xc0000f53f0) (0xc00042e000) Create stream\nI0408 21:28:12.466737 1723 log.go:172] (0xc0000f53f0) (0xc00042e000) Stream added, broadcasting: 3\nI0408 21:28:12.467605 1723 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0408 21:28:12.467645 1723 log.go:172] (0xc0000f53f0) (0xc000679cc0) Create stream\nI0408 21:28:12.467660 1723 log.go:172] (0xc0000f53f0) (0xc000679cc0) Stream added, broadcasting: 5\nI0408 21:28:12.468364 1723 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0408 21:28:12.558288 1723 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0408 21:28:12.558324 1723 
log.go:172] (0xc000679cc0) (5) Data frame handling\nI0408 21:28:12.558351 1723 log.go:172] (0xc000679cc0) (5) Data frame sent\n+ nslookup clusterip-service\nI0408 21:28:12.560788 1723 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0408 21:28:12.560806 1723 log.go:172] (0xc00042e000) (3) Data frame handling\nI0408 21:28:12.560819 1723 log.go:172] (0xc00042e000) (3) Data frame sent\nI0408 21:28:12.561666 1723 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0408 21:28:12.561683 1723 log.go:172] (0xc00042e000) (3) Data frame handling\nI0408 21:28:12.561699 1723 log.go:172] (0xc00042e000) (3) Data frame sent\nI0408 21:28:12.562159 1723 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0408 21:28:12.562181 1723 log.go:172] (0xc00042e000) (3) Data frame handling\nI0408 21:28:12.562212 1723 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0408 21:28:12.562230 1723 log.go:172] (0xc000679cc0) (5) Data frame handling\nI0408 21:28:12.563822 1723 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0408 21:28:12.563853 1723 log.go:172] (0xc000679c20) (1) Data frame handling\nI0408 21:28:12.563864 1723 log.go:172] (0xc000679c20) (1) Data frame sent\nI0408 21:28:12.563878 1723 log.go:172] (0xc0000f53f0) (0xc000679c20) Stream removed, broadcasting: 1\nI0408 21:28:12.563955 1723 log.go:172] (0xc0000f53f0) Go away received\nI0408 21:28:12.564230 1723 log.go:172] (0xc0000f53f0) (0xc000679c20) Stream removed, broadcasting: 1\nI0408 21:28:12.564249 1723 log.go:172] (0xc0000f53f0) (0xc00042e000) Stream removed, broadcasting: 3\nI0408 21:28:12.564258 1723 log.go:172] (0xc0000f53f0) (0xc000679cc0) Stream removed, broadcasting: 5\n" Apr 8 21:28:12.568: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3503.svc.cluster.local\tcanonical name = externalsvc.services-3503.svc.cluster.local.\nName:\texternalsvc.services-3503.svc.cluster.local\nAddress: 10.103.122.117\n\n" STEP: deleting ReplicationController externalsvc in 
namespace services-3503, will wait for the garbage collector to delete the pods Apr 8 21:28:12.627: INFO: Deleting ReplicationController externalsvc took: 6.303859ms Apr 8 21:28:12.928: INFO: Terminating ReplicationController externalsvc pods took: 300.241267ms Apr 8 21:28:19.554: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:28:19.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3503" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:17.626 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":78,"skipped":1309,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:28:19.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 21:28:20.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:28:22.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978100, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978100, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978100, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978100, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:28:25.160: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:28:25.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5641-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage 
version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:28:26.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3312" for this suite. STEP: Destroying namespace "webhook-3312-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.044 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":79,"skipped":1314,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:28:26.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 21:28:26.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69f8581b-85fe-4e03-ad5e-4f0b02b29485" in namespace "downward-api-959" to be "success or failure" Apr 8 21:28:26.795: INFO: Pod "downwardapi-volume-69f8581b-85fe-4e03-ad5e-4f0b02b29485": Phase="Pending", Reason="", readiness=false. Elapsed: 17.307271ms Apr 8 21:28:28.800: INFO: Pod "downwardapi-volume-69f8581b-85fe-4e03-ad5e-4f0b02b29485": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022127563s Apr 8 21:28:30.807: INFO: Pod "downwardapi-volume-69f8581b-85fe-4e03-ad5e-4f0b02b29485": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028998198s STEP: Saw pod success Apr 8 21:28:30.807: INFO: Pod "downwardapi-volume-69f8581b-85fe-4e03-ad5e-4f0b02b29485" satisfied condition "success or failure" Apr 8 21:28:30.810: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-69f8581b-85fe-4e03-ad5e-4f0b02b29485 container client-container: STEP: delete the pod Apr 8 21:28:30.831: INFO: Waiting for pod downwardapi-volume-69f8581b-85fe-4e03-ad5e-4f0b02b29485 to disappear Apr 8 21:28:30.835: INFO: Pod downwardapi-volume-69f8581b-85fe-4e03-ad5e-4f0b02b29485 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:28:30.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-959" for this suite. 
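The downward API test above exposes the container's CPU request through a volume. CPU requests use the Kubernetes quantity format, where `250m` means 250 millicores. A sketch of that notation (minimal illustrative parser, handling only the plain and milli forms, not the full quantity grammar):

```python
def parse_cpu(quantity: str) -> float:
    """Parse a Kubernetes CPU quantity string into cores.
    Minimal sketch: handles only plain ('2') and milli ('250m')
    forms; the real quantity format also allows other suffixes."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)

print(parse_cpu("250m"))  # 0.25
print(parse_cpu("2"))     # 2.0
```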
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:28:30.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 8 21:28:30.903: INFO: Waiting up to 5m0s for pod "pod-cf623269-573b-415b-8324-b5e50711da38" in namespace "emptydir-224" to be "success or failure" Apr 8 21:28:30.963: INFO: Pod "pod-cf623269-573b-415b-8324-b5e50711da38": Phase="Pending", Reason="", readiness=false. Elapsed: 59.842363ms Apr 8 21:28:32.966: INFO: Pod "pod-cf623269-573b-415b-8324-b5e50711da38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063033738s Apr 8 21:28:34.970: INFO: Pod "pod-cf623269-573b-415b-8324-b5e50711da38": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.066837407s STEP: Saw pod success Apr 8 21:28:34.970: INFO: Pod "pod-cf623269-573b-415b-8324-b5e50711da38" satisfied condition "success or failure" Apr 8 21:28:34.972: INFO: Trying to get logs from node jerma-worker2 pod pod-cf623269-573b-415b-8324-b5e50711da38 container test-container: STEP: delete the pod Apr 8 21:28:35.012: INFO: Waiting for pod pod-cf623269-573b-415b-8324-b5e50711da38 to disappear Apr 8 21:28:35.045: INFO: Pod pod-cf623269-573b-415b-8324-b5e50711da38 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:28:35.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-224" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1370,"failed":0} ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:28:35.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 8 21:28:43.208: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 8 21:28:43.226: INFO: Pod pod-with-poststart-exec-hook still exists Apr 8 21:28:45.226: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 8 21:28:45.230: INFO: Pod pod-with-poststart-exec-hook still exists Apr 8 21:28:47.226: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 8 21:28:47.229: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:28:47.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6611" for this suite. • [SLOW TEST:12.183 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1370,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:28:47.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:29:00.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9474" for this suite. • [SLOW TEST:13.160 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":83,"skipped":1393,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:29:00.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-2e336c2b-28e1-4006-a427-0cfafe6a6b78 STEP: Creating a pod to test consume secrets Apr 8 21:29:00.491: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3c8f328e-5800-4397-8716-184e85bca2f4" in namespace "projected-41" to be "success or failure" Apr 8 21:29:00.513: INFO: Pod "pod-projected-secrets-3c8f328e-5800-4397-8716-184e85bca2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.19064ms Apr 8 21:29:02.538: INFO: Pod "pod-projected-secrets-3c8f328e-5800-4397-8716-184e85bca2f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047217859s Apr 8 21:29:04.542: INFO: Pod "pod-projected-secrets-3c8f328e-5800-4397-8716-184e85bca2f4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.051458995s STEP: Saw pod success Apr 8 21:29:04.542: INFO: Pod "pod-projected-secrets-3c8f328e-5800-4397-8716-184e85bca2f4" satisfied condition "success or failure" Apr 8 21:29:04.545: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-3c8f328e-5800-4397-8716-184e85bca2f4 container projected-secret-volume-test: STEP: delete the pod Apr 8 21:29:04.562: INFO: Waiting for pod pod-projected-secrets-3c8f328e-5800-4397-8716-184e85bca2f4 to disappear Apr 8 21:29:04.566: INFO: Pod pod-projected-secrets-3c8f328e-5800-4397-8716-184e85bca2f4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:29:04.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-41" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1398,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:29:04.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: 
modifying the configmap once STEP: closing the watch once it receives two notifications Apr 8 21:29:04.629: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7598 /api/v1/namespaces/watch-7598/configmaps/e2e-watch-test-watch-closed a5b41b31-dda9-4a37-b720-db12b0d13e57 6508195 0 2020-04-08 21:29:04 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 8 21:29:04.629: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7598 /api/v1/namespaces/watch-7598/configmaps/e2e-watch-test-watch-closed a5b41b31-dda9-4a37-b720-db12b0d13e57 6508196 0 2020-04-08 21:29:04 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 8 21:29:04.651: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7598 /api/v1/namespaces/watch-7598/configmaps/e2e-watch-test-watch-closed a5b41b31-dda9-4a37-b720-db12b0d13e57 6508197 0 2020-04-08 21:29:04 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 8 21:29:04.651: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7598 /api/v1/namespaces/watch-7598/configmaps/e2e-watch-test-watch-closed a5b41b31-dda9-4a37-b720-db12b0d13e57 6508198 0 2020-04-08 21:29:04 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:29:04.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7598" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":85,"skipped":1400,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:29:04.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-7e0c72e4-aaf8-4cd7-aa01-26c84cc4a02a STEP: Creating a pod to test consume configMaps Apr 8 21:29:04.802: INFO: Waiting up to 5m0s for pod "pod-configmaps-a3aee0ee-3145-4305-b694-52e6cb070b9c" in namespace "configmap-5575" to be "success or failure" Apr 8 21:29:04.806: INFO: Pod "pod-configmaps-a3aee0ee-3145-4305-b694-52e6cb070b9c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.623795ms Apr 8 21:29:06.811: INFO: Pod "pod-configmaps-a3aee0ee-3145-4305-b694-52e6cb070b9c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008125332s Apr 8 21:29:08.815: INFO: Pod "pod-configmaps-a3aee0ee-3145-4305-b694-52e6cb070b9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012687637s STEP: Saw pod success Apr 8 21:29:08.815: INFO: Pod "pod-configmaps-a3aee0ee-3145-4305-b694-52e6cb070b9c" satisfied condition "success or failure" Apr 8 21:29:08.818: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a3aee0ee-3145-4305-b694-52e6cb070b9c container configmap-volume-test: STEP: delete the pod Apr 8 21:29:08.850: INFO: Waiting for pod pod-configmaps-a3aee0ee-3145-4305-b694-52e6cb070b9c to disappear Apr 8 21:29:08.879: INFO: Pod pod-configmaps-a3aee0ee-3145-4305-b694-52e6cb070b9c no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:29:08.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5575" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1403,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:29:08.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a 
pod to test emptydir 0666 on tmpfs Apr 8 21:29:08.958: INFO: Waiting up to 5m0s for pod "pod-f2702c52-f6a0-4f1e-97e5-e995606f4d69" in namespace "emptydir-7717" to be "success or failure" Apr 8 21:29:08.962: INFO: Pod "pod-f2702c52-f6a0-4f1e-97e5-e995606f4d69": Phase="Pending", Reason="", readiness=false. Elapsed: 3.285157ms Apr 8 21:29:10.966: INFO: Pod "pod-f2702c52-f6a0-4f1e-97e5-e995606f4d69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007378639s Apr 8 21:29:12.970: INFO: Pod "pod-f2702c52-f6a0-4f1e-97e5-e995606f4d69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011577306s STEP: Saw pod success Apr 8 21:29:12.970: INFO: Pod "pod-f2702c52-f6a0-4f1e-97e5-e995606f4d69" satisfied condition "success or failure" Apr 8 21:29:12.973: INFO: Trying to get logs from node jerma-worker2 pod pod-f2702c52-f6a0-4f1e-97e5-e995606f4d69 container test-container: STEP: delete the pod Apr 8 21:29:13.008: INFO: Waiting for pod pod-f2702c52-f6a0-4f1e-97e5-e995606f4d69 to disappear Apr 8 21:29:13.022: INFO: Pod pod-f2702c52-f6a0-4f1e-97e5-e995606f4d69 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:29:13.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7717" for this suite. 
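The emptyDir tests above request tmpfs backing via the volume's medium. A sketch of such a pod (names and the probe command are illustrative assumptions; the actual test writes a file and checks its mode and filesystem type):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # show the mount's permissions and confirm it is tmpfs
    command: ["sh", "-c", "ls -ld /test-volume && grep test-volume /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # backs the emptyDir with tmpfs instead of node disk
```

With `medium: Memory` omitted, the volume is backed by the node's default storage; the `(root,0666,tmpfs)` and `(root,0777,tmpfs)` variants above differ only in the file mode the test container applies.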
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1410,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:29:13.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-5488 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5488 to expose endpoints map[] Apr 8 21:29:13.208: INFO: Get endpoints failed (4.561898ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 8 21:29:14.214: INFO: successfully validated that service multi-endpoint-test in namespace services-5488 exposes endpoints map[] (1.010563905s elapsed) STEP: Creating pod pod1 in namespace services-5488 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5488 to expose endpoints map[pod1:[100]] Apr 8 21:29:17.268: INFO: successfully validated that service multi-endpoint-test in namespace services-5488 exposes endpoints map[pod1:[100]] (3.048011521s elapsed) STEP: Creating pod pod2 in namespace services-5488 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5488 to 
expose endpoints map[pod1:[100] pod2:[101]] Apr 8 21:29:20.398: INFO: successfully validated that service multi-endpoint-test in namespace services-5488 exposes endpoints map[pod1:[100] pod2:[101]] (3.125851957s elapsed) STEP: Deleting pod pod1 in namespace services-5488 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5488 to expose endpoints map[pod2:[101]] Apr 8 21:29:21.455: INFO: successfully validated that service multi-endpoint-test in namespace services-5488 exposes endpoints map[pod2:[101]] (1.051416021s elapsed) STEP: Deleting pod pod2 in namespace services-5488 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5488 to expose endpoints map[] Apr 8 21:29:22.482: INFO: successfully validated that service multi-endpoint-test in namespace services-5488 exposes endpoints map[] (1.02253294s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:29:22.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5488" for this suite. 
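The multiport Service test above validates that endpoints track pods across two named ports (the log shows container ports 100 and 101). A hedged sketch of a comparable Service definition (service ports, names, and the selector label are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint              # hypothetical label matched by pod1 and pod2
  ports:
  - name: portname1                  # names are required when a Service has multiple ports
    port: 80
    targetPort: 100                  # matches the endpoints map[pod1:[100]] in the log
  - name: portname2
    port: 81
    targetPort: 101                  # matches map[pod2:[101]]
```

As each backing pod is created or deleted, the endpoints object gains or loses the corresponding address set, which is exactly what the elapsed-time validations above are polling for.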
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.582 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":88,"skipped":1428,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:29:22.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-960cdde5-e58e-4758-993a-847718730da9 STEP: Creating a pod to test consume configMaps Apr 8 21:29:22.756: INFO: Waiting up to 5m0s for pod "pod-configmaps-50354cc4-d735-418f-bbd0-c30c5f82477a" in namespace "configmap-4218" to be "success or failure" Apr 8 21:29:22.771: INFO: Pod "pod-configmaps-50354cc4-d735-418f-bbd0-c30c5f82477a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.245313ms Apr 8 21:29:24.838: INFO: Pod "pod-configmaps-50354cc4-d735-418f-bbd0-c30c5f82477a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081744139s Apr 8 21:29:26.842: INFO: Pod "pod-configmaps-50354cc4-d735-418f-bbd0-c30c5f82477a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085911228s STEP: Saw pod success Apr 8 21:29:26.842: INFO: Pod "pod-configmaps-50354cc4-d735-418f-bbd0-c30c5f82477a" satisfied condition "success or failure" Apr 8 21:29:26.846: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-50354cc4-d735-418f-bbd0-c30c5f82477a container configmap-volume-test: STEP: delete the pod Apr 8 21:29:26.869: INFO: Waiting for pod pod-configmaps-50354cc4-d735-418f-bbd0-c30c5f82477a to disappear Apr 8 21:29:26.872: INFO: Pod pod-configmaps-50354cc4-d735-418f-bbd0-c30c5f82477a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:29:26.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4218" for this suite. 
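The ConfigMap volume test above uses item mappings with an explicit per-item mode. A minimal sketch of the pattern (key, mapped path, and mode are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map    # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/configmap-volume && cat /etc/configmap-volume/mapped-data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: mapped-data            # key is remapped to a different file name
        mode: 0400                   # per-item mode; overrides the volume's defaultMode
```

Without an `items` list every key becomes a file named after the key; the mapping variant tested above confirms both the renamed path and the requested mode on the projected file.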
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:29:26.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 8 21:29:26.972: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-902 /api/v1/namespaces/watch-902/configmaps/e2e-watch-test-label-changed 60238e9a-d182-4892-a83d-3be66dbb344d 6508400 0 2020-04-08 21:29:26 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 8 21:29:26.972: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-902 /api/v1/namespaces/watch-902/configmaps/e2e-watch-test-label-changed 60238e9a-d182-4892-a83d-3be66dbb344d 6508401 0 2020-04-08 21:29:26 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] 
[]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 8 21:29:26.972: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-902 /api/v1/namespaces/watch-902/configmaps/e2e-watch-test-label-changed 60238e9a-d182-4892-a83d-3be66dbb344d 6508402 0 2020-04-08 21:29:26 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 8 21:29:37.006: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-902 /api/v1/namespaces/watch-902/configmaps/e2e-watch-test-label-changed 60238e9a-d182-4892-a83d-3be66dbb344d 6508460 0 2020-04-08 21:29:26 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 8 21:29:37.006: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-902 /api/v1/namespaces/watch-902/configmaps/e2e-watch-test-label-changed 60238e9a-d182-4892-a83d-3be66dbb344d 6508461 0 2020-04-08 21:29:26 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 8 21:29:37.006: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-902 /api/v1/namespaces/watch-902/configmaps/e2e-watch-test-label-changed 60238e9a-d182-4892-a83d-3be66dbb344d 6508462 0 2020-04-08 21:29:26 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:29:37.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-902" for this suite. • [SLOW TEST:10.168 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":90,"skipped":1458,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:29:37.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 8 21:29:37.091: INFO: 
PodSpec: initContainers in spec.initContainers Apr 8 21:30:29.096: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b7ac7fce-2727-4f1b-9cbd-e923d81a8b7d", GenerateName:"", Namespace:"init-container-5986", SelfLink:"/api/v1/namespaces/init-container-5986/pods/pod-init-b7ac7fce-2727-4f1b-9cbd-e923d81a8b7d", UID:"96bf2724-e8a9-4a09-a0ab-0c4355e3fead", ResourceVersion:"6508650", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721978177, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"91310309"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-wgq2s", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002f1a580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wgq2s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wgq2s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", 
ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-wgq2s", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002d87d18), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00341dbc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", 
Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d87da0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002d87dc0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002d87dc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002d87dcc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978177, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978177, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978177, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978177, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, 
Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.123", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.123"}}, StartTime:(*v1.Time)(0xc001ff6220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027f8fc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0027f9030)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://9af99a7df58d7fbeea9ef26907f65a2f87bfb5aa4c093e65686951ddae0bba22", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ff6260), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ff6240), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002d87e4f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] 
[k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:30:29.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5986" for this suite. • [SLOW TEST:52.082 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":91,"skipped":1508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:30:29.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 8 21:30:29.239: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 8 21:30:34.243: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:30:34.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7335" for this suite. • [SLOW TEST:5.199 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":92,"skipped":1544,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:30:34.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 21:30:35.084: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:30:37.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978235, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978235, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978235, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978235, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:30:40.162: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:30:40.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6291" for this suite. STEP: Destroying namespace "webhook-6291-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.244 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":93,"skipped":1548,"failed":0} SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:30:40.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:30:40.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1990" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":94,"skipped":1552,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:30:40.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Apr 8 21:30:41.056: INFO: Waiting up to 5m0s for pod "client-containers-26123043-2c4f-47a4-a566-9b4873927f30" in namespace "containers-6524" to be "success or failure" Apr 8 21:30:41.066: INFO: Pod "client-containers-26123043-2c4f-47a4-a566-9b4873927f30": Phase="Pending", Reason="", readiness=false. Elapsed: 9.201627ms Apr 8 21:30:43.069: INFO: Pod "client-containers-26123043-2c4f-47a4-a566-9b4873927f30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012908582s Apr 8 21:30:45.073: INFO: Pod "client-containers-26123043-2c4f-47a4-a566-9b4873927f30": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016689047s STEP: Saw pod success Apr 8 21:30:45.073: INFO: Pod "client-containers-26123043-2c4f-47a4-a566-9b4873927f30" satisfied condition "success or failure" Apr 8 21:30:45.076: INFO: Trying to get logs from node jerma-worker pod client-containers-26123043-2c4f-47a4-a566-9b4873927f30 container test-container: STEP: delete the pod Apr 8 21:30:45.103: INFO: Waiting for pod client-containers-26123043-2c4f-47a4-a566-9b4873927f30 to disappear Apr 8 21:30:45.107: INFO: Pod client-containers-26123043-2c4f-47a4-a566-9b4873927f30 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:30:45.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6524" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:30:45.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace 
pod-network-test-2940 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 8 21:30:45.211: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 8 21:31:11.331: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.127:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2940 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:31:11.331: INFO: >>> kubeConfig: /root/.kube/config I0408 21:31:11.392154 6 log.go:172] (0xc0016c0160) (0xc00135cd20) Create stream I0408 21:31:11.392200 6 log.go:172] (0xc0016c0160) (0xc00135cd20) Stream added, broadcasting: 1 I0408 21:31:11.394524 6 log.go:172] (0xc0016c0160) Reply frame received for 1 I0408 21:31:11.394567 6 log.go:172] (0xc0016c0160) (0xc001118320) Create stream I0408 21:31:11.394581 6 log.go:172] (0xc0016c0160) (0xc001118320) Stream added, broadcasting: 3 I0408 21:31:11.395597 6 log.go:172] (0xc0016c0160) Reply frame received for 3 I0408 21:31:11.395635 6 log.go:172] (0xc0016c0160) (0xc00135cf00) Create stream I0408 21:31:11.395650 6 log.go:172] (0xc0016c0160) (0xc00135cf00) Stream added, broadcasting: 5 I0408 21:31:11.396490 6 log.go:172] (0xc0016c0160) Reply frame received for 5 I0408 21:31:11.496895 6 log.go:172] (0xc0016c0160) Data frame received for 3 I0408 21:31:11.496936 6 log.go:172] (0xc001118320) (3) Data frame handling I0408 21:31:11.496968 6 log.go:172] (0xc001118320) (3) Data frame sent I0408 21:31:11.497384 6 log.go:172] (0xc0016c0160) Data frame received for 5 I0408 21:31:11.497432 6 log.go:172] (0xc00135cf00) (5) Data frame handling I0408 21:31:11.497483 6 log.go:172] (0xc0016c0160) Data frame received for 3 I0408 21:31:11.497518 6 log.go:172] (0xc001118320) (3) Data frame handling I0408 21:31:11.499258 6 log.go:172] (0xc0016c0160) Data frame received for 1 I0408 21:31:11.499275 6 log.go:172] 
(0xc00135cd20) (1) Data frame handling I0408 21:31:11.499301 6 log.go:172] (0xc00135cd20) (1) Data frame sent I0408 21:31:11.499318 6 log.go:172] (0xc0016c0160) (0xc00135cd20) Stream removed, broadcasting: 1 I0408 21:31:11.499376 6 log.go:172] (0xc0016c0160) (0xc00135cd20) Stream removed, broadcasting: 1 I0408 21:31:11.499394 6 log.go:172] (0xc0016c0160) (0xc001118320) Stream removed, broadcasting: 3 I0408 21:31:11.499413 6 log.go:172] (0xc0016c0160) (0xc00135cf00) Stream removed, broadcasting: 5 I0408 21:31:11.499438 6 log.go:172] (0xc0016c0160) Go away received Apr 8 21:31:11.499: INFO: Found all expected endpoints: [netserver-0] Apr 8 21:31:11.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.191:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2940 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:31:11.503: INFO: >>> kubeConfig: /root/.kube/config I0408 21:31:11.535851 6 log.go:172] (0xc001a6ed10) (0xc001119040) Create stream I0408 21:31:11.535874 6 log.go:172] (0xc001a6ed10) (0xc001119040) Stream added, broadcasting: 1 I0408 21:31:11.538217 6 log.go:172] (0xc001a6ed10) Reply frame received for 1 I0408 21:31:11.538247 6 log.go:172] (0xc001a6ed10) (0xc0013385a0) Create stream I0408 21:31:11.538262 6 log.go:172] (0xc001a6ed10) (0xc0013385a0) Stream added, broadcasting: 3 I0408 21:31:11.539319 6 log.go:172] (0xc001a6ed10) Reply frame received for 3 I0408 21:31:11.539360 6 log.go:172] (0xc001a6ed10) (0xc001ebd220) Create stream I0408 21:31:11.539377 6 log.go:172] (0xc001a6ed10) (0xc001ebd220) Stream added, broadcasting: 5 I0408 21:31:11.540315 6 log.go:172] (0xc001a6ed10) Reply frame received for 5 I0408 21:31:11.606630 6 log.go:172] (0xc001a6ed10) Data frame received for 5 I0408 21:31:11.606652 6 log.go:172] (0xc001ebd220) (5) Data frame handling I0408 21:31:11.606696 6 log.go:172] (0xc001a6ed10) Data frame 
received for 3 I0408 21:31:11.606727 6 log.go:172] (0xc0013385a0) (3) Data frame handling I0408 21:31:11.606756 6 log.go:172] (0xc0013385a0) (3) Data frame sent I0408 21:31:11.607029 6 log.go:172] (0xc001a6ed10) Data frame received for 3 I0408 21:31:11.607054 6 log.go:172] (0xc0013385a0) (3) Data frame handling I0408 21:31:11.608136 6 log.go:172] (0xc001a6ed10) Data frame received for 1 I0408 21:31:11.608153 6 log.go:172] (0xc001119040) (1) Data frame handling I0408 21:31:11.608159 6 log.go:172] (0xc001119040) (1) Data frame sent I0408 21:31:11.608301 6 log.go:172] (0xc001a6ed10) (0xc001119040) Stream removed, broadcasting: 1 I0408 21:31:11.608336 6 log.go:172] (0xc001a6ed10) Go away received I0408 21:31:11.608442 6 log.go:172] (0xc001a6ed10) (0xc001119040) Stream removed, broadcasting: 1 I0408 21:31:11.608467 6 log.go:172] (0xc001a6ed10) (0xc0013385a0) Stream removed, broadcasting: 3 I0408 21:31:11.608487 6 log.go:172] (0xc001a6ed10) (0xc001ebd220) Stream removed, broadcasting: 5 Apr 8 21:31:11.608: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:31:11.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2940" for this suite. 
• [SLOW TEST:26.500 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":96,"skipped":1621,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:31:11.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-f6e900af-152e-4812-a230-aedff71586c6 STEP: Creating a pod to test consume configMaps Apr 8 21:31:11.712: INFO: Waiting up to 5m0s for pod "pod-configmaps-ef95c1c4-be54-468c-adfe-9804378fe2a2" in namespace "configmap-4363" to be "success or failure" Apr 8 21:31:11.724: INFO: Pod "pod-configmaps-ef95c1c4-be54-468c-adfe-9804378fe2a2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.72471ms Apr 8 21:31:13.727: INFO: Pod "pod-configmaps-ef95c1c4-be54-468c-adfe-9804378fe2a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015087012s Apr 8 21:31:15.731: INFO: Pod "pod-configmaps-ef95c1c4-be54-468c-adfe-9804378fe2a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018586084s STEP: Saw pod success Apr 8 21:31:15.731: INFO: Pod "pod-configmaps-ef95c1c4-be54-468c-adfe-9804378fe2a2" satisfied condition "success or failure" Apr 8 21:31:15.733: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-ef95c1c4-be54-468c-adfe-9804378fe2a2 container configmap-volume-test: STEP: delete the pod Apr 8 21:31:15.798: INFO: Waiting for pod pod-configmaps-ef95c1c4-be54-468c-adfe-9804378fe2a2 to disappear Apr 8 21:31:15.814: INFO: Pod pod-configmaps-ef95c1c4-be54-468c-adfe-9804378fe2a2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:31:15.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4363" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1643,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:31:15.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 8 21:31:22.560: INFO: 0 pods remaining Apr 8 21:31:22.560: INFO: 0 pods has nil DeletionTimestamp Apr 8 21:31:22.560: INFO: STEP: Gathering metrics W0408 21:31:23.606722 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 8 21:31:23.606: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:31:23.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4879" for this suite. 
• [SLOW TEST:8.444 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":98,"skipped":1649,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:31:24.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:31:28.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7248" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1655,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:31:28.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 21:31:29.069: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f999244d-7a6c-4e5c-b51f-6101da42633a" in namespace "projected-7213" to be "success or failure"
Apr 8 21:31:29.085: INFO: Pod "downwardapi-volume-f999244d-7a6c-4e5c-b51f-6101da42633a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.126705ms
Apr 8 21:31:31.103: INFO: Pod "downwardapi-volume-f999244d-7a6c-4e5c-b51f-6101da42633a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034018921s
Apr 8 21:31:33.110: INFO: Pod "downwardapi-volume-f999244d-7a6c-4e5c-b51f-6101da42633a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04131635s
STEP: Saw pod success
Apr 8 21:31:33.110: INFO: Pod "downwardapi-volume-f999244d-7a6c-4e5c-b51f-6101da42633a" satisfied condition "success or failure"
Apr 8 21:31:33.113: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f999244d-7a6c-4e5c-b51f-6101da42633a container client-container: 
STEP: delete the pod
Apr 8 21:31:33.166: INFO: Waiting for pod downwardapi-volume-f999244d-7a6c-4e5c-b51f-6101da42633a to disappear
Apr 8 21:31:33.170: INFO: Pod downwardapi-volume-f999244d-7a6c-4e5c-b51f-6101da42633a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:31:33.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7213" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1668,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:31:33.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 8 21:31:33.214: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the sample API server.
Apr 8 21:31:33.655: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 8 21:31:35.767: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978293, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978293, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978293, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978293, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 8 21:31:38.319: INFO: Waited 541.779731ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:31:38.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-288" for this suite.
• [SLOW TEST:5.670 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":101,"skipped":1675,"failed":0}
[sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:31:38.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 8 21:31:39.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-203'
Apr 8 21:31:39.159: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 8 21:31:39.159: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495
Apr 8 21:31:41.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-203'
Apr 8 21:31:41.320: INFO: stderr: ""
Apr 8 21:31:41.320: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:31:41.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-203" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":102,"skipped":1675,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:31:41.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 8 21:31:41.824: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 8 21:31:43.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978301, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978301, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978301, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978301, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 21:31:46.870: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:31:47.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5143" for this suite.
STEP: Destroying namespace "webhook-5143-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.153 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":103,"skipped":1681,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:31:47.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 8 21:31:55.894: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 8 21:31:55.901: INFO: Pod pod-with-prestop-http-hook still exists
Apr 8 21:31:57.901: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 8 21:31:57.917: INFO: Pod pod-with-prestop-http-hook still exists
Apr 8 21:31:59.901: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 8 21:31:59.916: INFO: Pod pod-with-prestop-http-hook still exists
Apr 8 21:32:01.901: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 8 21:32:01.906: INFO: Pod pod-with-prestop-http-hook still exists
Apr 8 21:32:03.901: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 8 21:32:03.906: INFO: Pod pod-with-prestop-http-hook still exists
Apr 8 21:32:05.901: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 8 21:32:05.906: INFO: Pod pod-with-prestop-http-hook still exists
Apr 8 21:32:07.901: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 8 21:32:07.905: INFO: Pod pod-with-prestop-http-hook still exists
Apr 8 21:32:09.901: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 8 21:32:09.905: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:32:09.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1831" for this suite.
• [SLOW TEST:22.430 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1683,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:32:09.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 21:32:10.000: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4360c2d0-9fa1-4057-9b5c-ab93eb378511" in namespace "downward-api-1455" to be "success or failure"
Apr 8 21:32:10.031: INFO: Pod "downwardapi-volume-4360c2d0-9fa1-4057-9b5c-ab93eb378511": Phase="Pending", Reason="", readiness=false. Elapsed: 31.687022ms
Apr 8 21:32:12.035: INFO: Pod "downwardapi-volume-4360c2d0-9fa1-4057-9b5c-ab93eb378511": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035315812s
Apr 8 21:32:14.040: INFO: Pod "downwardapi-volume-4360c2d0-9fa1-4057-9b5c-ab93eb378511": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040036945s
STEP: Saw pod success
Apr 8 21:32:14.040: INFO: Pod "downwardapi-volume-4360c2d0-9fa1-4057-9b5c-ab93eb378511" satisfied condition "success or failure"
Apr 8 21:32:14.043: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4360c2d0-9fa1-4057-9b5c-ab93eb378511 container client-container: 
STEP: delete the pod
Apr 8 21:32:14.109: INFO: Waiting for pod downwardapi-volume-4360c2d0-9fa1-4057-9b5c-ab93eb378511 to disappear
Apr 8 21:32:14.111: INFO: Pod downwardapi-volume-4360c2d0-9fa1-4057-9b5c-ab93eb378511 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:32:14.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1455" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:32:14.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 8 21:32:14.180: INFO: Waiting up to 5m0s for pod "pod-bbdedbed-19a8-42a5-8345-3e230a6bb7ee" in namespace "emptydir-2825" to be "success or failure"
Apr 8 21:32:14.184: INFO: Pod "pod-bbdedbed-19a8-42a5-8345-3e230a6bb7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.849816ms
Apr 8 21:32:16.196: INFO: Pod "pod-bbdedbed-19a8-42a5-8345-3e230a6bb7ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016255238s
Apr 8 21:32:18.200: INFO: Pod "pod-bbdedbed-19a8-42a5-8345-3e230a6bb7ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020469933s
STEP: Saw pod success
Apr 8 21:32:18.200: INFO: Pod "pod-bbdedbed-19a8-42a5-8345-3e230a6bb7ee" satisfied condition "success or failure"
Apr 8 21:32:18.204: INFO: Trying to get logs from node jerma-worker pod pod-bbdedbed-19a8-42a5-8345-3e230a6bb7ee container test-container: 
STEP: delete the pod
Apr 8 21:32:18.269: INFO: Waiting for pod pod-bbdedbed-19a8-42a5-8345-3e230a6bb7ee to disappear
Apr 8 21:32:18.274: INFO: Pod pod-bbdedbed-19a8-42a5-8345-3e230a6bb7ee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:32:18.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2825" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1728,"failed":0}
SSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:32:18.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 8 21:32:18.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:32:22.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6199" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1731,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:32:22.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 8 21:32:22.605: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5a61c566-b7f6-4a7a-b26c-ea95b8640264" in namespace "security-context-test-6240" to be "success or failure"
Apr 8 21:32:22.620: INFO: Pod "busybox-user-65534-5a61c566-b7f6-4a7a-b26c-ea95b8640264": Phase="Pending", Reason="", readiness=false. Elapsed: 14.503727ms
Apr 8 21:32:24.631: INFO: Pod "busybox-user-65534-5a61c566-b7f6-4a7a-b26c-ea95b8640264": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025510941s
Apr 8 21:32:26.635: INFO: Pod "busybox-user-65534-5a61c566-b7f6-4a7a-b26c-ea95b8640264": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029379688s
Apr 8 21:32:26.635: INFO: Pod "busybox-user-65534-5a61c566-b7f6-4a7a-b26c-ea95b8640264" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:32:26.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6240" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":1746,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:32:26.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:32:37.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9712" for this suite.
• [SLOW TEST:11.245 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a service. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":109,"skipped":1759,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:32:37.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 21:32:37.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9e86521-1a00-49ba-ab80-39d43955506c" in namespace "projected-4920" to be "success or failure"
Apr 8 21:32:37.945: INFO: Pod "downwardapi-volume-c9e86521-1a00-49ba-ab80-39d43955506c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.394468ms
Apr 8 21:32:39.949: INFO: Pod "downwardapi-volume-c9e86521-1a00-49ba-ab80-39d43955506c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008786807s
Apr 8 21:32:41.953: INFO: Pod "downwardapi-volume-c9e86521-1a00-49ba-ab80-39d43955506c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013036872s
STEP: Saw pod success
Apr 8 21:32:41.953: INFO: Pod "downwardapi-volume-c9e86521-1a00-49ba-ab80-39d43955506c" satisfied condition "success or failure"
Apr 8 21:32:41.956: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c9e86521-1a00-49ba-ab80-39d43955506c container client-container: 
STEP: delete the pod
Apr 8 21:32:41.975: INFO: Waiting for pod downwardapi-volume-c9e86521-1a00-49ba-ab80-39d43955506c to disappear
Apr 8 21:32:41.994: INFO: Pod downwardapi-volume-c9e86521-1a00-49ba-ab80-39d43955506c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:32:41.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4920" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1780,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:32:42.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 21:32:42.082: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d60b16fb-f9f3-408c-9d55-c9e99ff93ff7" in namespace "downward-api-4418" to be "success or failure"
Apr 8 21:32:42.089: INFO: Pod "downwardapi-volume-d60b16fb-f9f3-408c-9d55-c9e99ff93ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.260805ms
Apr 8 21:32:44.095: INFO: Pod "downwardapi-volume-d60b16fb-f9f3-408c-9d55-c9e99ff93ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013600857s
Apr 8 21:32:46.100: INFO: Pod "downwardapi-volume-d60b16fb-f9f3-408c-9d55-c9e99ff93ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018054865s
STEP: Saw pod success
Apr 8 21:32:46.100: INFO: Pod "downwardapi-volume-d60b16fb-f9f3-408c-9d55-c9e99ff93ff7" satisfied condition "success or failure"
Apr 8 21:32:46.103: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d60b16fb-f9f3-408c-9d55-c9e99ff93ff7 container client-container: 
STEP: delete the pod
Apr 8 21:32:46.135: INFO: Waiting for pod downwardapi-volume-d60b16fb-f9f3-408c-9d55-c9e99ff93ff7 to disappear
Apr 8 21:32:46.149: INFO: Pod downwardapi-volume-d60b16fb-f9f3-408c-9d55-c9e99ff93ff7 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:32:46.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4418" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1789,"failed":0} SS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:32:46.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5323/secret-test-e5f14b0c-2165-4051-9411-3b0067e0ac98 STEP: Creating a pod to test consume secrets Apr 8 21:32:46.241: INFO: Waiting up to 5m0s for pod "pod-configmaps-419845ea-4d04-43b2-aeee-785aff6242fe" in namespace "secrets-5323" to be "success or failure" Apr 8 21:32:46.264: INFO: Pod "pod-configmaps-419845ea-4d04-43b2-aeee-785aff6242fe": Phase="Pending", Reason="", readiness=false. Elapsed: 23.13591ms Apr 8 21:32:48.268: INFO: Pod "pod-configmaps-419845ea-4d04-43b2-aeee-785aff6242fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027230009s Apr 8 21:32:50.272: INFO: Pod "pod-configmaps-419845ea-4d04-43b2-aeee-785aff6242fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031336155s STEP: Saw pod success Apr 8 21:32:50.272: INFO: Pod "pod-configmaps-419845ea-4d04-43b2-aeee-785aff6242fe" satisfied condition "success or failure" Apr 8 21:32:50.276: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-419845ea-4d04-43b2-aeee-785aff6242fe container env-test: STEP: delete the pod Apr 8 21:32:50.307: INFO: Waiting for pod pod-configmaps-419845ea-4d04-43b2-aeee-785aff6242fe to disappear Apr 8 21:32:50.310: INFO: Pod pod-configmaps-419845ea-4d04-43b2-aeee-785aff6242fe no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:32:50.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5323" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1791,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:32:50.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the 
deployment to be ready Apr 8 21:32:51.196: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:32:53.397: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978371, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978371, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978371, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978371, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:32:56.481: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:32:56.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API Apr 8 21:32:57.092: INFO: Waiting for webhook configuration to be ready... 
STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:32:57.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6965" for this suite. STEP: Destroying namespace "webhook-6965-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.713 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":113,"skipped":1791,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client Apr 8 21:32:58.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:32:58.065: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 8 21:33:00.141: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:33:01.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-731" for this suite. 
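For context, the quota scenario this spec exercises — a quota capped at two pods plus a replication controller that asks for more — can be sketched as a manifest. Object names match the log; the replica count, labels, and image are assumptions, not the suite's exact fixture:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"            # allows only two pods in the namespace, per the log
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3            # assumed: any count above the quota triggers the failure condition
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd
        image: httpd:2.4 # placeholder image; the suite ships its own test images
```

With these applied, the controller surfaces a `ReplicaFailure` condition until it is scaled down to fit the quota, which is what the "Scaling down rc ... to satisfy pod quota" step verifies.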
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":114,"skipped":1799,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:33:01.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 21:33:02.307: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:33:04.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978382, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978382, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721978382, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978382, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:33:07.644: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:33:08.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8103" for this suite. STEP: Destroying namespace "webhook-8103-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.014 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":115,"skipped":1813,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:33:08.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:33:12.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4015" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1847,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:33:12.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:33:28.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9069" for this suite. • [SLOW TEST:16.130 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":117,"skipped":1858,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:33:28.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5331 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5331 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5331 Apr 8 21:33:28.594: INFO: Found 0 stateful pods, waiting for 1 Apr 8 21:33:38.599: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that 
stateful set scale up will halt with unhealthy stateful pod Apr 8 21:33:38.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5331 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:33:38.876: INFO: stderr: "I0408 21:33:38.734177 1785 log.go:172] (0xc000105600) (0xc000914000) Create stream\nI0408 21:33:38.734251 1785 log.go:172] (0xc000105600) (0xc000914000) Stream added, broadcasting: 1\nI0408 21:33:38.736518 1785 log.go:172] (0xc000105600) Reply frame received for 1\nI0408 21:33:38.736566 1785 log.go:172] (0xc000105600) (0xc0009140a0) Create stream\nI0408 21:33:38.736585 1785 log.go:172] (0xc000105600) (0xc0009140a0) Stream added, broadcasting: 3\nI0408 21:33:38.737937 1785 log.go:172] (0xc000105600) Reply frame received for 3\nI0408 21:33:38.737975 1785 log.go:172] (0xc000105600) (0xc0009ea000) Create stream\nI0408 21:33:38.737985 1785 log.go:172] (0xc000105600) (0xc0009ea000) Stream added, broadcasting: 5\nI0408 21:33:38.738743 1785 log.go:172] (0xc000105600) Reply frame received for 5\nI0408 21:33:38.841609 1785 log.go:172] (0xc000105600) Data frame received for 5\nI0408 21:33:38.841634 1785 log.go:172] (0xc0009ea000) (5) Data frame handling\nI0408 21:33:38.841648 1785 log.go:172] (0xc0009ea000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:33:38.867384 1785 log.go:172] (0xc000105600) Data frame received for 3\nI0408 21:33:38.867433 1785 log.go:172] (0xc0009140a0) (3) Data frame handling\nI0408 21:33:38.867472 1785 log.go:172] (0xc0009140a0) (3) Data frame sent\nI0408 21:33:38.867688 1785 log.go:172] (0xc000105600) Data frame received for 3\nI0408 21:33:38.867733 1785 log.go:172] (0xc0009140a0) (3) Data frame handling\nI0408 21:33:38.867963 1785 log.go:172] (0xc000105600) Data frame received for 5\nI0408 21:33:38.867982 1785 log.go:172] (0xc0009ea000) (5) Data frame handling\nI0408 21:33:38.869651 1785 log.go:172] (0xc000105600) Data 
frame received for 1\nI0408 21:33:38.869687 1785 log.go:172] (0xc000914000) (1) Data frame handling\nI0408 21:33:38.869705 1785 log.go:172] (0xc000914000) (1) Data frame sent\nI0408 21:33:38.869726 1785 log.go:172] (0xc000105600) (0xc000914000) Stream removed, broadcasting: 1\nI0408 21:33:38.869783 1785 log.go:172] (0xc000105600) Go away received\nI0408 21:33:38.870304 1785 log.go:172] (0xc000105600) (0xc000914000) Stream removed, broadcasting: 1\nI0408 21:33:38.870335 1785 log.go:172] (0xc000105600) (0xc0009140a0) Stream removed, broadcasting: 3\nI0408 21:33:38.870355 1785 log.go:172] (0xc000105600) (0xc0009ea000) Stream removed, broadcasting: 5\n" Apr 8 21:33:38.876: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:33:38.876: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:33:38.879: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 8 21:33:48.884: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 21:33:48.884: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:33:48.898: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999523s Apr 8 21:33:49.902: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.996276956s Apr 8 21:33:50.906: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.992119043s Apr 8 21:33:51.910: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.987560216s Apr 8 21:33:52.915: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.983504543s Apr 8 21:33:53.919: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.978960555s Apr 8 21:33:54.923: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.974552603s Apr 8 21:33:55.927: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.971282812s Apr 8 
21:33:56.931: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.967155244s Apr 8 21:33:57.935: INFO: Verifying statefulset ss doesn't scale past 1 for another 962.927536ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5331 Apr 8 21:33:58.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5331 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:33:59.161: INFO: stderr: "I0408 21:33:59.080209 1806 log.go:172] (0xc0000f56b0) (0xc000a96000) Create stream\nI0408 21:33:59.080269 1806 log.go:172] (0xc0000f56b0) (0xc000a96000) Stream added, broadcasting: 1\nI0408 21:33:59.083194 1806 log.go:172] (0xc0000f56b0) Reply frame received for 1\nI0408 21:33:59.083240 1806 log.go:172] (0xc0000f56b0) (0xc000680000) Create stream\nI0408 21:33:59.083260 1806 log.go:172] (0xc0000f56b0) (0xc000680000) Stream added, broadcasting: 3\nI0408 21:33:59.084336 1806 log.go:172] (0xc0000f56b0) Reply frame received for 3\nI0408 21:33:59.084387 1806 log.go:172] (0xc0000f56b0) (0xc000a960a0) Create stream\nI0408 21:33:59.084399 1806 log.go:172] (0xc0000f56b0) (0xc000a960a0) Stream added, broadcasting: 5\nI0408 21:33:59.085323 1806 log.go:172] (0xc0000f56b0) Reply frame received for 5\nI0408 21:33:59.155678 1806 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0408 21:33:59.155702 1806 log.go:172] (0xc000680000) (3) Data frame handling\nI0408 21:33:59.155715 1806 log.go:172] (0xc000680000) (3) Data frame sent\nI0408 21:33:59.155720 1806 log.go:172] (0xc0000f56b0) Data frame received for 3\nI0408 21:33:59.155724 1806 log.go:172] (0xc000680000) (3) Data frame handling\nI0408 21:33:59.155850 1806 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0408 21:33:59.155867 1806 log.go:172] (0xc000a960a0) (5) Data frame handling\nI0408 21:33:59.155882 1806 log.go:172] (0xc000a960a0) (5) Data frame sent\nI0408 21:33:59.155895 
1806 log.go:172] (0xc0000f56b0) Data frame received for 5\nI0408 21:33:59.155905 1806 log.go:172] (0xc000a960a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 21:33:59.157600 1806 log.go:172] (0xc0000f56b0) Data frame received for 1\nI0408 21:33:59.157625 1806 log.go:172] (0xc000a96000) (1) Data frame handling\nI0408 21:33:59.157637 1806 log.go:172] (0xc000a96000) (1) Data frame sent\nI0408 21:33:59.157649 1806 log.go:172] (0xc0000f56b0) (0xc000a96000) Stream removed, broadcasting: 1\nI0408 21:33:59.157663 1806 log.go:172] (0xc0000f56b0) Go away received\nI0408 21:33:59.158111 1806 log.go:172] (0xc0000f56b0) (0xc000a96000) Stream removed, broadcasting: 1\nI0408 21:33:59.158138 1806 log.go:172] (0xc0000f56b0) (0xc000680000) Stream removed, broadcasting: 3\nI0408 21:33:59.158151 1806 log.go:172] (0xc0000f56b0) (0xc000a960a0) Stream removed, broadcasting: 5\n" Apr 8 21:33:59.161: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:33:59.161: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 21:33:59.165: INFO: Found 1 stateful pods, waiting for 3 Apr 8 21:34:09.170: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:34:09.170: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:34:09.170: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 8 21:34:09.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5331 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:34:09.399: INFO: stderr: "I0408 21:34:09.304184 1829 log.go:172] (0xc000ad4c60) (0xc0008fa280) Create 
stream\nI0408 21:34:09.304265 1829 log.go:172] (0xc000ad4c60) (0xc0008fa280) Stream added, broadcasting: 1\nI0408 21:34:09.308210 1829 log.go:172] (0xc000ad4c60) Reply frame received for 1\nI0408 21:34:09.308285 1829 log.go:172] (0xc000ad4c60) (0xc000a10000) Create stream\nI0408 21:34:09.308354 1829 log.go:172] (0xc000ad4c60) (0xc000a10000) Stream added, broadcasting: 3\nI0408 21:34:09.311108 1829 log.go:172] (0xc000ad4c60) Reply frame received for 3\nI0408 21:34:09.311137 1829 log.go:172] (0xc000ad4c60) (0xc0008fa0a0) Create stream\nI0408 21:34:09.311146 1829 log.go:172] (0xc000ad4c60) (0xc0008fa0a0) Stream added, broadcasting: 5\nI0408 21:34:09.312154 1829 log.go:172] (0xc000ad4c60) Reply frame received for 5\nI0408 21:34:09.392605 1829 log.go:172] (0xc000ad4c60) Data frame received for 5\nI0408 21:34:09.392641 1829 log.go:172] (0xc0008fa0a0) (5) Data frame handling\nI0408 21:34:09.392653 1829 log.go:172] (0xc0008fa0a0) (5) Data frame sent\nI0408 21:34:09.392659 1829 log.go:172] (0xc000ad4c60) Data frame received for 5\nI0408 21:34:09.392664 1829 log.go:172] (0xc0008fa0a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:34:09.392683 1829 log.go:172] (0xc000ad4c60) Data frame received for 3\nI0408 21:34:09.392688 1829 log.go:172] (0xc000a10000) (3) Data frame handling\nI0408 21:34:09.392694 1829 log.go:172] (0xc000a10000) (3) Data frame sent\nI0408 21:34:09.392699 1829 log.go:172] (0xc000ad4c60) Data frame received for 3\nI0408 21:34:09.392703 1829 log.go:172] (0xc000a10000) (3) Data frame handling\nI0408 21:34:09.394460 1829 log.go:172] (0xc000ad4c60) Data frame received for 1\nI0408 21:34:09.394476 1829 log.go:172] (0xc0008fa280) (1) Data frame handling\nI0408 21:34:09.394482 1829 log.go:172] (0xc0008fa280) (1) Data frame sent\nI0408 21:34:09.394490 1829 log.go:172] (0xc000ad4c60) (0xc0008fa280) Stream removed, broadcasting: 1\nI0408 21:34:09.394538 1829 log.go:172] (0xc000ad4c60) Go away received\nI0408 21:34:09.394707 1829 
log.go:172] (0xc000ad4c60) (0xc0008fa280) Stream removed, broadcasting: 1\nI0408 21:34:09.394719 1829 log.go:172] (0xc000ad4c60) (0xc000a10000) Stream removed, broadcasting: 3\nI0408 21:34:09.394725 1829 log.go:172] (0xc000ad4c60) (0xc0008fa0a0) Stream removed, broadcasting: 5\n" Apr 8 21:34:09.399: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:34:09.399: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:34:09.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5331 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:34:09.655: INFO: stderr: "I0408 21:34:09.534965 1852 log.go:172] (0xc000bf9080) (0xc000c50320) Create stream\nI0408 21:34:09.535022 1852 log.go:172] (0xc000bf9080) (0xc000c50320) Stream added, broadcasting: 1\nI0408 21:34:09.537391 1852 log.go:172] (0xc000bf9080) Reply frame received for 1\nI0408 21:34:09.537455 1852 log.go:172] (0xc000bf9080) (0xc000a78280) Create stream\nI0408 21:34:09.537491 1852 log.go:172] (0xc000bf9080) (0xc000a78280) Stream added, broadcasting: 3\nI0408 21:34:09.538681 1852 log.go:172] (0xc000bf9080) Reply frame received for 3\nI0408 21:34:09.538713 1852 log.go:172] (0xc000bf9080) (0xc000c503c0) Create stream\nI0408 21:34:09.538722 1852 log.go:172] (0xc000bf9080) (0xc000c503c0) Stream added, broadcasting: 5\nI0408 21:34:09.539723 1852 log.go:172] (0xc000bf9080) Reply frame received for 5\nI0408 21:34:09.616042 1852 log.go:172] (0xc000bf9080) Data frame received for 5\nI0408 21:34:09.616070 1852 log.go:172] (0xc000c503c0) (5) Data frame handling\nI0408 21:34:09.616084 1852 log.go:172] (0xc000c503c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:34:09.647296 1852 log.go:172] (0xc000bf9080) Data frame received for 5\nI0408 21:34:09.647380 1852 log.go:172] (0xc000c503c0) 
(5) Data frame handling\nI0408 21:34:09.647443 1852 log.go:172] (0xc000bf9080) Data frame received for 3\nI0408 21:34:09.647520 1852 log.go:172] (0xc000a78280) (3) Data frame handling\nI0408 21:34:09.647573 1852 log.go:172] (0xc000a78280) (3) Data frame sent\nI0408 21:34:09.647616 1852 log.go:172] (0xc000bf9080) Data frame received for 3\nI0408 21:34:09.647636 1852 log.go:172] (0xc000a78280) (3) Data frame handling\nI0408 21:34:09.649993 1852 log.go:172] (0xc000bf9080) Data frame received for 1\nI0408 21:34:09.650015 1852 log.go:172] (0xc000c50320) (1) Data frame handling\nI0408 21:34:09.650030 1852 log.go:172] (0xc000c50320) (1) Data frame sent\nI0408 21:34:09.650052 1852 log.go:172] (0xc000bf9080) (0xc000c50320) Stream removed, broadcasting: 1\nI0408 21:34:09.650070 1852 log.go:172] (0xc000bf9080) Go away received\nI0408 21:34:09.650484 1852 log.go:172] (0xc000bf9080) (0xc000c50320) Stream removed, broadcasting: 1\nI0408 21:34:09.650504 1852 log.go:172] (0xc000bf9080) (0xc000a78280) Stream removed, broadcasting: 3\nI0408 21:34:09.650518 1852 log.go:172] (0xc000bf9080) (0xc000c503c0) Stream removed, broadcasting: 5\n" Apr 8 21:34:09.655: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:34:09.655: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:34:09.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5331 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:34:09.890: INFO: stderr: "I0408 21:34:09.784641 1872 log.go:172] (0xc00065cb00) (0xc00063e000) Create stream\nI0408 21:34:09.784690 1872 log.go:172] (0xc00065cb00) (0xc00063e000) Stream added, broadcasting: 1\nI0408 21:34:09.787785 1872 log.go:172] (0xc00065cb00) Reply frame received for 1\nI0408 21:34:09.787824 1872 log.go:172] (0xc00065cb00) (0xc000647a40) Create stream\nI0408 
21:34:09.787835 1872 log.go:172] (0xc00065cb00) (0xc000647a40) Stream added, broadcasting: 3\nI0408 21:34:09.788822 1872 log.go:172] (0xc00065cb00) Reply frame received for 3\nI0408 21:34:09.788850 1872 log.go:172] (0xc00065cb00) (0xc00063e140) Create stream\nI0408 21:34:09.788859 1872 log.go:172] (0xc00065cb00) (0xc00063e140) Stream added, broadcasting: 5\nI0408 21:34:09.789782 1872 log.go:172] (0xc00065cb00) Reply frame received for 5\nI0408 21:34:09.854317 1872 log.go:172] (0xc00065cb00) Data frame received for 5\nI0408 21:34:09.854350 1872 log.go:172] (0xc00063e140) (5) Data frame handling\nI0408 21:34:09.854376 1872 log.go:172] (0xc00063e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:34:09.882011 1872 log.go:172] (0xc00065cb00) Data frame received for 3\nI0408 21:34:09.882036 1872 log.go:172] (0xc000647a40) (3) Data frame handling\nI0408 21:34:09.882064 1872 log.go:172] (0xc000647a40) (3) Data frame sent\nI0408 21:34:09.882263 1872 log.go:172] (0xc00065cb00) Data frame received for 5\nI0408 21:34:09.882275 1872 log.go:172] (0xc00063e140) (5) Data frame handling\nI0408 21:34:09.882502 1872 log.go:172] (0xc00065cb00) Data frame received for 3\nI0408 21:34:09.882535 1872 log.go:172] (0xc000647a40) (3) Data frame handling\nI0408 21:34:09.884911 1872 log.go:172] (0xc00065cb00) Data frame received for 1\nI0408 21:34:09.884924 1872 log.go:172] (0xc00063e000) (1) Data frame handling\nI0408 21:34:09.884930 1872 log.go:172] (0xc00063e000) (1) Data frame sent\nI0408 21:34:09.884937 1872 log.go:172] (0xc00065cb00) (0xc00063e000) Stream removed, broadcasting: 1\nI0408 21:34:09.885271 1872 log.go:172] (0xc00065cb00) (0xc00063e000) Stream removed, broadcasting: 1\nI0408 21:34:09.885295 1872 log.go:172] (0xc00065cb00) (0xc000647a40) Stream removed, broadcasting: 3\nI0408 21:34:09.885394 1872 log.go:172] (0xc00065cb00) Go away received\nI0408 21:34:09.885434 1872 log.go:172] (0xc00065cb00) (0xc00063e140) Stream removed, broadcasting: 
5\n" Apr 8 21:34:09.890: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:34:09.890: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:34:09.890: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:34:09.894: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 8 21:34:19.903: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 21:34:19.903: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 8 21:34:19.903: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 8 21:34:19.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999951s Apr 8 21:34:20.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993244017s Apr 8 21:34:21.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988254424s Apr 8 21:34:22.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983197417s Apr 8 21:34:23.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.9785717s Apr 8 21:34:24.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97334349s Apr 8 21:34:25.947: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968286985s Apr 8 21:34:26.952: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.96306974s Apr 8 21:34:27.957: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.95752156s Apr 8 21:34:28.967: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.380587ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5331 Apr 8 21:34:29.973: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5331 ss-0 -- /bin/sh -x 
-c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:34:32.661: INFO: stderr: "I0408 21:34:32.560977 1893 log.go:172] (0xc0001058c0) (0xc0006da000) Create stream\nI0408 21:34:32.561022 1893 log.go:172] (0xc0001058c0) (0xc0006da000) Stream added, broadcasting: 1\nI0408 21:34:32.564308 1893 log.go:172] (0xc0001058c0) Reply frame received for 1\nI0408 21:34:32.564372 1893 log.go:172] (0xc0001058c0) (0xc000726000) Create stream\nI0408 21:34:32.564384 1893 log.go:172] (0xc0001058c0) (0xc000726000) Stream added, broadcasting: 3\nI0408 21:34:32.565413 1893 log.go:172] (0xc0001058c0) Reply frame received for 3\nI0408 21:34:32.565452 1893 log.go:172] (0xc0001058c0) (0xc00076c000) Create stream\nI0408 21:34:32.565460 1893 log.go:172] (0xc0001058c0) (0xc00076c000) Stream added, broadcasting: 5\nI0408 21:34:32.566176 1893 log.go:172] (0xc0001058c0) Reply frame received for 5\nI0408 21:34:32.653360 1893 log.go:172] (0xc0001058c0) Data frame received for 5\nI0408 21:34:32.653411 1893 log.go:172] (0xc00076c000) (5) Data frame handling\nI0408 21:34:32.653445 1893 log.go:172] (0xc00076c000) (5) Data frame sent\nI0408 21:34:32.653469 1893 log.go:172] (0xc0001058c0) Data frame received for 5\nI0408 21:34:32.653484 1893 log.go:172] (0xc00076c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 21:34:32.653519 1893 log.go:172] (0xc0001058c0) Data frame received for 3\nI0408 21:34:32.653539 1893 log.go:172] (0xc000726000) (3) Data frame handling\nI0408 21:34:32.653564 1893 log.go:172] (0xc000726000) (3) Data frame sent\nI0408 21:34:32.653581 1893 log.go:172] (0xc0001058c0) Data frame received for 3\nI0408 21:34:32.653590 1893 log.go:172] (0xc000726000) (3) Data frame handling\nI0408 21:34:32.655012 1893 log.go:172] (0xc0001058c0) Data frame received for 1\nI0408 21:34:32.655026 1893 log.go:172] (0xc0006da000) (1) Data frame handling\nI0408 21:34:32.655039 1893 log.go:172] (0xc0006da000) (1) Data frame sent\nI0408 21:34:32.655045 1893 
log.go:172] (0xc0001058c0) (0xc0006da000) Stream removed, broadcasting: 1\nI0408 21:34:32.655254 1893 log.go:172] (0xc0001058c0) Go away received\nI0408 21:34:32.655346 1893 log.go:172] (0xc0001058c0) (0xc0006da000) Stream removed, broadcasting: 1\nI0408 21:34:32.655360 1893 log.go:172] (0xc0001058c0) (0xc000726000) Stream removed, broadcasting: 3\nI0408 21:34:32.655366 1893 log.go:172] (0xc0001058c0) (0xc00076c000) Stream removed, broadcasting: 5\n" Apr 8 21:34:32.661: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:34:32.661: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 21:34:32.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5331 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:34:32.853: INFO: stderr: "I0408 21:34:32.780598 1924 log.go:172] (0xc0007bc840) (0xc0007ae320) Create stream\nI0408 21:34:32.780654 1924 log.go:172] (0xc0007bc840) (0xc0007ae320) Stream added, broadcasting: 1\nI0408 21:34:32.783409 1924 log.go:172] (0xc0007bc840) Reply frame received for 1\nI0408 21:34:32.783453 1924 log.go:172] (0xc0007bc840) (0xc0005965a0) Create stream\nI0408 21:34:32.783467 1924 log.go:172] (0xc0007bc840) (0xc0005965a0) Stream added, broadcasting: 3\nI0408 21:34:32.784388 1924 log.go:172] (0xc0007bc840) Reply frame received for 3\nI0408 21:34:32.784418 1924 log.go:172] (0xc0007bc840) (0xc0007ae3c0) Create stream\nI0408 21:34:32.784430 1924 log.go:172] (0xc0007bc840) (0xc0007ae3c0) Stream added, broadcasting: 5\nI0408 21:34:32.785542 1924 log.go:172] (0xc0007bc840) Reply frame received for 5\nI0408 21:34:32.845932 1924 log.go:172] (0xc0007bc840) Data frame received for 3\nI0408 21:34:32.846063 1924 log.go:172] (0xc0005965a0) (3) Data frame handling\nI0408 21:34:32.846085 1924 log.go:172] (0xc0005965a0) (3) Data frame sent\nI0408 
21:34:32.846102 1924 log.go:172] (0xc0007bc840) Data frame received for 3\nI0408 21:34:32.846111 1924 log.go:172] (0xc0005965a0) (3) Data frame handling\nI0408 21:34:32.846125 1924 log.go:172] (0xc0007bc840) Data frame received for 5\nI0408 21:34:32.846139 1924 log.go:172] (0xc0007ae3c0) (5) Data frame handling\nI0408 21:34:32.846150 1924 log.go:172] (0xc0007ae3c0) (5) Data frame sent\nI0408 21:34:32.846159 1924 log.go:172] (0xc0007bc840) Data frame received for 5\nI0408 21:34:32.846166 1924 log.go:172] (0xc0007ae3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 21:34:32.847955 1924 log.go:172] (0xc0007bc840) Data frame received for 1\nI0408 21:34:32.847980 1924 log.go:172] (0xc0007ae320) (1) Data frame handling\nI0408 21:34:32.847992 1924 log.go:172] (0xc0007ae320) (1) Data frame sent\nI0408 21:34:32.848006 1924 log.go:172] (0xc0007bc840) (0xc0007ae320) Stream removed, broadcasting: 1\nI0408 21:34:32.848318 1924 log.go:172] (0xc0007bc840) (0xc0007ae320) Stream removed, broadcasting: 1\nI0408 21:34:32.848331 1924 log.go:172] (0xc0007bc840) (0xc0005965a0) Stream removed, broadcasting: 3\nI0408 21:34:32.848449 1924 log.go:172] (0xc0007bc840) (0xc0007ae3c0) Stream removed, broadcasting: 5\n" Apr 8 21:34:32.853: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:34:32.853: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 21:34:32.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5331 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:34:33.025: INFO: stderr: "I0408 21:34:32.972684 1944 log.go:172] (0xc0003dadc0) (0xc0005df9a0) Create stream\nI0408 21:34:32.972728 1944 log.go:172] (0xc0003dadc0) (0xc0005df9a0) Stream added, broadcasting: 1\nI0408 21:34:32.974989 1944 log.go:172] (0xc0003dadc0) Reply frame 
received for 1\nI0408 21:34:32.975017 1944 log.go:172] (0xc0003dadc0) (0xc0008ea000) Create stream\nI0408 21:34:32.975025 1944 log.go:172] (0xc0003dadc0) (0xc0008ea000) Stream added, broadcasting: 3\nI0408 21:34:32.975866 1944 log.go:172] (0xc0003dadc0) Reply frame received for 3\nI0408 21:34:32.975903 1944 log.go:172] (0xc0003dadc0) (0xc0005dfb80) Create stream\nI0408 21:34:32.975923 1944 log.go:172] (0xc0003dadc0) (0xc0005dfb80) Stream added, broadcasting: 5\nI0408 21:34:32.976770 1944 log.go:172] (0xc0003dadc0) Reply frame received for 5\nI0408 21:34:33.018553 1944 log.go:172] (0xc0003dadc0) Data frame received for 5\nI0408 21:34:33.018605 1944 log.go:172] (0xc0005dfb80) (5) Data frame handling\nI0408 21:34:33.018635 1944 log.go:172] (0xc0005dfb80) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 21:34:33.018671 1944 log.go:172] (0xc0003dadc0) Data frame received for 3\nI0408 21:34:33.018718 1944 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0408 21:34:33.018748 1944 log.go:172] (0xc0008ea000) (3) Data frame sent\nI0408 21:34:33.018808 1944 log.go:172] (0xc0003dadc0) Data frame received for 3\nI0408 21:34:33.018831 1944 log.go:172] (0xc0008ea000) (3) Data frame handling\nI0408 21:34:33.018879 1944 log.go:172] (0xc0003dadc0) Data frame received for 5\nI0408 21:34:33.018915 1944 log.go:172] (0xc0005dfb80) (5) Data frame handling\nI0408 21:34:33.020531 1944 log.go:172] (0xc0003dadc0) Data frame received for 1\nI0408 21:34:33.020548 1944 log.go:172] (0xc0005df9a0) (1) Data frame handling\nI0408 21:34:33.020557 1944 log.go:172] (0xc0005df9a0) (1) Data frame sent\nI0408 21:34:33.020570 1944 log.go:172] (0xc0003dadc0) (0xc0005df9a0) Stream removed, broadcasting: 1\nI0408 21:34:33.020585 1944 log.go:172] (0xc0003dadc0) Go away received\nI0408 21:34:33.021046 1944 log.go:172] (0xc0003dadc0) (0xc0005df9a0) Stream removed, broadcasting: 1\nI0408 21:34:33.021069 1944 log.go:172] (0xc0003dadc0) (0xc0008ea000) Stream removed, broadcasting: 
3\nI0408 21:34:33.021079 1944 log.go:172] (0xc0003dadc0) (0xc0005dfb80) Stream removed, broadcasting: 5\n" Apr 8 21:34:33.025: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:34:33.025: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 21:34:33.025: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 8 21:34:43.064: INFO: Deleting all statefulset in ns statefulset-5331 Apr 8 21:34:43.067: INFO: Scaling statefulset ss to 0 Apr 8 21:34:43.077: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:34:43.079: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:34:43.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5331" for this suite. 
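Each exec in the log above runs the same one-liner, `mv -v <src> <dst>/ || true`, inside the pod: moving `index.html` out of the Apache web root fails the readiness probe, and moving it back restores readiness so scaling can resume. A minimal local sketch of that file swap (temp-dir stand-ins for the container paths, no cluster involved):

```python
import shutil
import tempfile
from pathlib import Path

# Stand-ins for /usr/local/apache2/htdocs and /tmp inside the pod
htdocs = Path(tempfile.mkdtemp())
tmp = Path(tempfile.mkdtemp())
(htdocs / "index.html").write_text("ok\n")

def swap(src: Path, dst_dir: Path) -> bool:
    """Mirror `mv -v src dst/ || true`: move if present, never raise."""
    try:
        shutil.move(str(src), str(dst_dir / src.name))
        return True
    except FileNotFoundError:
        return False  # the `|| true` case: file was already moved

# Break readiness: the web root no longer serves index.html
assert swap(htdocs / "index.html", tmp)
assert not (htdocs / "index.html").exists()

# Restore readiness: move the file back
assert swap(tmp / "index.html", htdocs)
assert (htdocs / "index.html").read_text() == "ok\n"
```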
• [SLOW TEST:74.619 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":118,"skipped":1871,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:34:43.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:34:43.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 8 21:34:43.325: INFO: stderr: "" Apr 8 21:34:43.325: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", 
GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:48:13Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:34:43.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7706" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":119,"skipped":1875,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:34:43.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 8 21:34:43.430: INFO: Waiting up to 5m0s for pod "pod-80d99a1e-3a6e-44f1-8ac5-3bf239f03a68" in namespace "emptydir-4187" to be "success or failure" Apr 8 21:34:43.434: INFO: Pod "pod-80d99a1e-3a6e-44f1-8ac5-3bf239f03a68": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.478388ms Apr 8 21:34:45.438: INFO: Pod "pod-80d99a1e-3a6e-44f1-8ac5-3bf239f03a68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008638993s Apr 8 21:34:47.443: INFO: Pod "pod-80d99a1e-3a6e-44f1-8ac5-3bf239f03a68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01355112s STEP: Saw pod success Apr 8 21:34:47.443: INFO: Pod "pod-80d99a1e-3a6e-44f1-8ac5-3bf239f03a68" satisfied condition "success or failure" Apr 8 21:34:47.447: INFO: Trying to get logs from node jerma-worker pod pod-80d99a1e-3a6e-44f1-8ac5-3bf239f03a68 container test-container: STEP: delete the pod Apr 8 21:34:47.509: INFO: Waiting for pod pod-80d99a1e-3a6e-44f1-8ac5-3bf239f03a68 to disappear Apr 8 21:34:47.533: INFO: Pod pod-80d99a1e-3a6e-44f1-8ac5-3bf239f03a68 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:34:47.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4187" for this suite. 
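The `Waiting up to 5m0s for pod ... to be "success or failure"` lines above implement a simple poll: re-read the pod phase until it is terminal, where `Succeeded` passes and `Failed` fails. A cluster-free sketch of that predicate, with the phase sequence observed in the log stubbed in:

```python
# Phase sequence observed in the log: two Pending reads, then Succeeded
phases = iter(["Pending", "Pending", "Succeeded"])

def pod_succeeded_or_failed(get_phase) -> str:
    """Poll until the pod reaches a terminal phase; return that phase."""
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        # Non-terminal ("Pending", "Running"): keep polling.
        # (The real framework sleeps ~2s between reads and enforces the timeout.)

final = pod_succeeded_or_failed(lambda: next(phases))
assert final == "Succeeded"
```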
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1881,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:34:47.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 8 21:34:47.593: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 21:34:47.640: INFO: Waiting for terminating namespaces to be deleted... 
Apr 8 21:34:47.642: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 8 21:34:47.646: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 21:34:47.646: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 21:34:47.647: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 21:34:47.647: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 21:34:47.647: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 8 21:34:47.667: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 21:34:47.667: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 21:34:47.667: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 8 21:34:47.667: INFO: Container kube-hunter ready: false, restart count 0 Apr 8 21:34:47.667: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 21:34:47.667: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 21:34:47.667: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 8 21:34:47.667: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1603f5bd6c9e2685], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
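The `FailedScheduling` event above is produced by a pod whose `nodeSelector` matches no node label. A minimal manifest sketch that reproduces the condition (the label key/value and image are illustrative, not the ones the test generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    # No node carries this label, so the scheduler reports
    # "0/3 nodes are available: 3 node(s) didn't match node selector."
    example.invalid/nonexistent: "true"
```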
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:34:48.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9558" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":121,"skipped":1881,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:34:48.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 8 21:34:48.745: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:35:05.969: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8121" for this suite. • [SLOW TEST:17.277 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":122,"skipped":1904,"failed":0} [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:35:05.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:35:12.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3443" for this suite. STEP: Destroying namespace "nsdeletetest-952" for this suite. Apr 8 21:35:12.255: INFO: Namespace nsdeletetest-952 was already deleted STEP: Destroying namespace "nsdeletetest-8898" for this suite. • [SLOW TEST:6.282 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":123,"skipped":1904,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:35:12.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory 
request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 21:35:12.318: INFO: Waiting up to 5m0s for pod "downwardapi-volume-070bdca0-ac07-45c6-ba43-8a72d1ce7414" in namespace "downward-api-415" to be "success or failure" Apr 8 21:35:12.334: INFO: Pod "downwardapi-volume-070bdca0-ac07-45c6-ba43-8a72d1ce7414": Phase="Pending", Reason="", readiness=false. Elapsed: 16.212851ms Apr 8 21:35:14.338: INFO: Pod "downwardapi-volume-070bdca0-ac07-45c6-ba43-8a72d1ce7414": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020043297s Apr 8 21:35:16.342: INFO: Pod "downwardapi-volume-070bdca0-ac07-45c6-ba43-8a72d1ce7414": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024056739s STEP: Saw pod success Apr 8 21:35:16.342: INFO: Pod "downwardapi-volume-070bdca0-ac07-45c6-ba43-8a72d1ce7414" satisfied condition "success or failure" Apr 8 21:35:16.363: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-070bdca0-ac07-45c6-ba43-8a72d1ce7414 container client-container: STEP: delete the pod Apr 8 21:35:16.380: INFO: Waiting for pod downwardapi-volume-070bdca0-ac07-45c6-ba43-8a72d1ce7414 to disappear Apr 8 21:35:16.385: INFO: Pod downwardapi-volume-070bdca0-ac07-45c6-ba43-8a72d1ce7414 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:35:16.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-415" for this suite. 
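The downward API volume test above mounts a volume whose file content is populated from the container's own memory request via `resourceFieldRef`. A manifest sketch of that wiring (names and the 64Mi request are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```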
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":1942,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:35:16.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0408 21:35:46.987389 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 8 21:35:46.987: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:35:46.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3134" for this suite.
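`deleteOptions.PropagationPolicy: Orphan` tells the API server to delete the Deployment without cascading, which is why the ReplicaSet survives the 30-second observation window above. A sketch of the request body a client sends for such a delete (only the body is shown; the request path is omitted):

```python
import json

# Body for DELETE .../deployments/<name> that orphans dependents:
# the garbage collector strips ownerReferences instead of deleting the RS.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    "propagationPolicy": "Orphan",  # alternatives: "Background", "Foreground"
}

body = json.dumps(delete_options, sort_keys=True)
assert "Orphan" in body
```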
• [SLOW TEST:30.603 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":125,"skipped":1943,"failed":0} S ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:35:46.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 8 21:35:47.074: INFO: Waiting up to 5m0s for pod "downward-api-4995953d-cfe8-433e-aeb2-6885b45f7559" in namespace "downward-api-4783" to be "success or failure" Apr 8 21:35:47.091: INFO: Pod "downward-api-4995953d-cfe8-433e-aeb2-6885b45f7559": Phase="Pending", Reason="", readiness=false. Elapsed: 17.42127ms Apr 8 21:35:49.095: INFO: Pod "downward-api-4995953d-cfe8-433e-aeb2-6885b45f7559": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020972941s Apr 8 21:35:51.099: INFO: Pod "downward-api-4995953d-cfe8-433e-aeb2-6885b45f7559": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025558139s STEP: Saw pod success Apr 8 21:35:51.099: INFO: Pod "downward-api-4995953d-cfe8-433e-aeb2-6885b45f7559" satisfied condition "success or failure" Apr 8 21:35:51.107: INFO: Trying to get logs from node jerma-worker pod downward-api-4995953d-cfe8-433e-aeb2-6885b45f7559 container dapi-container: STEP: delete the pod Apr 8 21:35:51.231: INFO: Waiting for pod downward-api-4995953d-cfe8-433e-aeb2-6885b45f7559 to disappear Apr 8 21:35:51.235: INFO: Pod downward-api-4995953d-cfe8-433e-aeb2-6885b45f7559 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:35:51.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4783" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":1944,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:35:51.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's 
Pods Apr 8 21:35:55.897: INFO: Successfully updated pod "adopt-release-lv5rk" STEP: Checking that the Job readopts the Pod Apr 8 21:35:55.897: INFO: Waiting up to 15m0s for pod "adopt-release-lv5rk" in namespace "job-1301" to be "adopted" Apr 8 21:35:55.926: INFO: Pod "adopt-release-lv5rk": Phase="Running", Reason="", readiness=true. Elapsed: 29.189052ms Apr 8 21:35:57.944: INFO: Pod "adopt-release-lv5rk": Phase="Running", Reason="", readiness=true. Elapsed: 2.047179149s Apr 8 21:35:57.944: INFO: Pod "adopt-release-lv5rk" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 8 21:35:58.453: INFO: Successfully updated pod "adopt-release-lv5rk" STEP: Checking that the Job releases the Pod Apr 8 21:35:58.454: INFO: Waiting up to 15m0s for pod "adopt-release-lv5rk" in namespace "job-1301" to be "released" Apr 8 21:35:58.458: INFO: Pod "adopt-release-lv5rk": Phase="Running", Reason="", readiness=true. Elapsed: 3.894899ms Apr 8 21:36:00.514: INFO: Pod "adopt-release-lv5rk": Phase="Running", Reason="", readiness=true. Elapsed: 2.060038842s Apr 8 21:36:00.514: INFO: Pod "adopt-release-lv5rk" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:00.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1301" for this suite. 
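Adoption and release in the Job test above hinge on label matching: the controller adopts an orphaned pod whose labels match its selector, and releases a pod whose labels stop matching. A cluster-free toy of that decision (selector, labels, and the three-way outcome are illustrative simplifications of the real controller):

```python
def reconcile(selector: dict, pod_labels: dict, owned: bool) -> str:
    """Toy version of controller adoption/release logic."""
    matches = all(pod_labels.get(k) == v for k, v in selector.items())
    if matches and not owned:
        return "adopt"    # set an ownerReference on the pod
    if not matches and owned:
        return "release"  # remove the ownerReference from the pod
    return "keep"

selector = {"job-name": "adopt-release"}

# Orphaned pod whose labels still match -> readopted
assert reconcile(selector, {"job-name": "adopt-release"}, owned=False) == "adopt"
# Labels removed from an owned pod -> released
assert reconcile(selector, {}, owned=True) == "release"
```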
• [SLOW TEST:9.280 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":127,"skipped":1970,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:00.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 8 21:36:00.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7588' Apr 8 21:36:00.938: INFO: stderr: "" Apr 8 21:36:00.938: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 8 21:36:01.943: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:36:01.943: INFO: Found 0 / 1 Apr 8 21:36:02.946: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:36:02.946: INFO: Found 0 / 1 Apr 8 21:36:03.943: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:36:03.943: INFO: Found 1 / 1 Apr 8 21:36:03.943: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 8 21:36:03.946: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:36:03.946: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 8 21:36:03.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-bhmlk --namespace=kubectl-7588 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 8 21:36:04.047: INFO: stderr: "" Apr 8 21:36:04.047: INFO: stdout: "pod/agnhost-master-bhmlk patched\n" STEP: checking annotations Apr 8 21:36:04.050: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:36:04.050: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:04.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7588" for this suite. 
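[Editor's note] `kubectl patch` defaults to a strategic merge patch for pods, but for a plain map of annotations the effect is the same as an RFC 7386 JSON merge patch, which the command in the log applies with `-p {"metadata":{"annotations":{"x":"y"}}}`. A sketch of that merge semantics (the pod dict below is a toy stand-in, not the real object):

```python
def merge_patch(target, patch):
    """RFC 7386 JSON merge patch: dicts merge recursively, explicit None deletes a key,
    any non-dict patch value replaces the target wholesale."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

pod = {"metadata": {"name": "agnhost-master-bhmlk", "annotations": {}}}
patched = merge_patch(pod, {"metadata": {"annotations": {"x": "y"}}})
assert patched["metadata"]["annotations"] == {"x": "y"}   # annotation added
assert patched["metadata"]["name"] == "agnhost-master-bhmlk"  # rest untouched
```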
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":128,"skipped":1971,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:04.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 21:36:04.627: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:36:06.680: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978564, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978564, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721978564, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978564, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:36:09.753: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:10.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6006" for this suite. STEP: Destroying namespace "webhook-6006-markers" for this suite. 
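[Editor's note] The point of this test is that the API server exempts webhook configuration objects from admission webhooks: even a registration whose rules explicitly target them, roughly like the sketch below, cannot mutate or block their deletion. This fragment is illustrative only (the metadata/webhook names and path are hypothetical; the service name/namespace match the ones the log waits on):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: demo-deny-webhook-config-deletions   # hypothetical name
webhooks:
  - name: demo.example.com                   # hypothetical
    clientConfig:
      service:
        name: e2e-test-webhook               # service from the log above
        namespace: webhook-6006
        path: /always-deny                   # hypothetical handler path
      caBundle: <base64-encoded CA>          # elided
    rules:
      - apiGroups: ["admissionregistration.k8s.io"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
    sideEffects: None
    admissionReviewVersions: ["v1"]
    failurePolicy: Fail
```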
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.212 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":129,"skipped":1974,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:10.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Apr 8 21:36:10.867: INFO: created pod pod-service-account-defaultsa Apr 8 21:36:10.867: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 8 21:36:10.873: INFO: created pod pod-service-account-mountsa Apr 8 21:36:10.873: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 8 21:36:10.884: INFO: 
created pod pod-service-account-nomountsa Apr 8 21:36:10.884: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 8 21:36:11.003: INFO: created pod pod-service-account-defaultsa-mountspec Apr 8 21:36:11.003: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 8 21:36:11.051: INFO: created pod pod-service-account-mountsa-mountspec Apr 8 21:36:11.051: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 8 21:36:11.167: INFO: created pod pod-service-account-nomountsa-mountspec Apr 8 21:36:11.167: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 8 21:36:11.406: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 8 21:36:11.406: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 8 21:36:11.410: INFO: created pod pod-service-account-mountsa-nomountspec Apr 8 21:36:11.410: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 8 21:36:11.437: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 8 21:36:11.437: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:11.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1433" for this suite. 
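[Editor's note] The nine pods above enumerate the documented precedence for token automounting: `pod.spec.automountServiceAccountToken`, when set, overrides the ServiceAccount's `automountServiceAccountToken`, which in turn overrides the default of mounting. The log's results (mount only when neither opts out, with the pod spec winning) reduce to:

```python
def token_mounted(pod_setting, sa_setting):
    """Pod spec wins if set; otherwise the ServiceAccount's setting; default True."""
    if pod_setting is not None:
        return pod_setting
    if sa_setting is not None:
        return sa_setting
    return True

# Mirrors the nine pods in the log: SA setting (default/mount/nomount)
# crossed with pod-spec setting (unset/mountspec/nomountspec).
for sa in (None, True, False):
    assert token_mounted(None, sa) is (True if sa is None else sa)
    assert token_mounted(True, sa) is True    # *-mountspec pods: always mounted
    assert token_mounted(False, sa) is False  # *-nomountspec pods: never mounted
```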
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":130,"skipped":1996,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:11.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Apr 8 21:36:11.725: INFO: Waiting up to 5m0s for pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36" in namespace "var-expansion-8767" to be "success or failure" Apr 8 21:36:11.727: INFO: Pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.397404ms Apr 8 21:36:13.731: INFO: Pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006139584s Apr 8 21:36:15.891: INFO: Pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165988644s Apr 8 21:36:17.977: INFO: Pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.252202452s Apr 8 21:36:20.095: INFO: Pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.369793325s Apr 8 21:36:22.099: INFO: Pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36": Phase="Running", Reason="", readiness=true. Elapsed: 10.373740048s Apr 8 21:36:24.103: INFO: Pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.378380875s STEP: Saw pod success Apr 8 21:36:24.103: INFO: Pod "var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36" satisfied condition "success or failure" Apr 8 21:36:24.107: INFO: Trying to get logs from node jerma-worker pod var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36 container dapi-container: STEP: delete the pod Apr 8 21:36:24.171: INFO: Waiting for pod var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36 to disappear Apr 8 21:36:24.183: INFO: Pod var-expansion-0ee2955f-da26-44d3-acd8-f23c56f40e36 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:24.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8767" for this suite. • [SLOW TEST:12.582 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2055,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:24.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:24.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9336" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":132,"skipped":2088,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:24.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 21:36:24.981: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 21:36:26.990: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978584, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978584, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978585, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721978584, 
loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 21:36:30.044: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:40.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5486" for this suite. STEP: Destroying namespace "webhook-5486-markers" for this suite. 
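[Editor's note] The "create a namespace that bypass the webhook" step works through a `namespaceSelector` on the webhook registration: only namespaces matching the selector are intercepted, so a labeled (whitelisted) namespace can still create the otherwise-denied configmap. An illustrative fragment of that part of the registration (the webhook name and label key are hypothetical):

```yaml
webhooks:
  - name: deny-unwanted-objects.example.com   # hypothetical
    namespaceSelector:
      matchExpressions:
        - key: skip-webhook          # hypothetical label key
          operator: DoesNotExist     # namespaces carrying this label bypass the webhook
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "configmaps"]
```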
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.899 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":133,"skipped":2094,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:40.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:36:40.382: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 8 21:36:45.412: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 8 21:36:45.412: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 8 21:36:49.500: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4342 /apis/apps/v1/namespaces/deployment-4342/deployments/test-cleanup-deployment 3fe4e5d8-9f9f-48a4-8f1d-0902ebc77aed 6511712 1 2020-04-08 21:36:45 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004bb6688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-08 21:36:45 +0000 UTC,LastTransitionTime:2020-04-08 21:36:45 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-04-08 21:36:48 +0000 UTC,LastTransitionTime:2020-04-08 21:36:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 8 21:36:49.505: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-4342 /apis/apps/v1/namespaces/deployment-4342/replicasets/test-cleanup-deployment-55ffc6b7b6 64a6eb57-92d5-41af-82a3-d7926cc44bb7 6511701 1 2020-04-08 21:36:45 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 3fe4e5d8-9f9f-48a4-8f1d-0902ebc77aed 0xc003580077 0xc003580078}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035800e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 8 21:36:49.517: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-jzdsn" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-jzdsn test-cleanup-deployment-55ffc6b7b6- deployment-4342 /api/v1/namespaces/deployment-4342/pods/test-cleanup-deployment-55ffc6b7b6-jzdsn 2ef67b93-39c9-4ef1-9868-29836ab113aa 6511700 0 2020-04-08 21:36:45 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 64a6eb57-92d5-41af-82a3-d7926cc44bb7 0xc003580467 0xc003580468}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tzq9q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tzq9q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tzq9q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:
nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:36:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:36:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 21:36:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.218,StartTime:2020-04-08 21:36:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 21:36:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://6f8c510ad229f096ec64669c5f56c7cbbd42ceaf7a3a29f08333d0ca45a163dd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:49.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4342" for this suite. 
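[Editor's note] Old ReplicaSets are deleted here because the Deployment in the dump above sets `RevisionHistoryLimit:*0`, i.e. keep zero old ReplicaSets after a rollout. In manifest form (reconstructed from the dumped spec; fields not shown in the dump are omitted):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0   # retain no superseded ReplicaSets
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
        - name: agnhost
          image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```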
• [SLOW TEST:9.249 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":134,"skipped":2096,"failed":0} [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:49.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:53.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4776" for this suite. 
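[Editor's note] Secret and ConfigMap volumes are implemented on top of emptyDir "wrapper" volumes, and this test checks that mounting several of them in one pod does not conflict. A minimal pod of that shape (all names here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-volumes-demo      # hypothetical
spec:
  containers:
    - name: main
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret
        - name: cm-vol
          mountPath: /etc/config
  volumes:
    - name: secret-vol
      secret:
        secretName: demo-secret   # hypothetical
    - name: cm-vol
      configMap:
        name: demo-config         # hypothetical
```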
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":135,"skipped":2096,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:53.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-13b34dcb-6297-4de2-b2f0-b549db5abd71 STEP: Creating a pod to test consume configMaps Apr 8 21:36:53.798: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a1190df4-9475-4d91-bc4b-22b16fa8c013" in namespace "projected-5853" to be "success or failure" Apr 8 21:36:53.904: INFO: Pod "pod-projected-configmaps-a1190df4-9475-4d91-bc4b-22b16fa8c013": Phase="Pending", Reason="", readiness=false. Elapsed: 105.76502ms Apr 8 21:36:55.945: INFO: Pod "pod-projected-configmaps-a1190df4-9475-4d91-bc4b-22b16fa8c013": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147463085s Apr 8 21:36:57.949: INFO: Pod "pod-projected-configmaps-a1190df4-9475-4d91-bc4b-22b16fa8c013": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.151186479s STEP: Saw pod success Apr 8 21:36:57.949: INFO: Pod "pod-projected-configmaps-a1190df4-9475-4d91-bc4b-22b16fa8c013" satisfied condition "success or failure" Apr 8 21:36:57.951: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-a1190df4-9475-4d91-bc4b-22b16fa8c013 container projected-configmap-volume-test: STEP: delete the pod Apr 8 21:36:57.996: INFO: Waiting for pod pod-projected-configmaps-a1190df4-9475-4d91-bc4b-22b16fa8c013 to disappear Apr 8 21:36:58.010: INFO: Pod pod-projected-configmaps-a1190df4-9475-4d91-bc4b-22b16fa8c013 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:36:58.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5853" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2121,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:36:58.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: creating a service nodeport-service with the type=NodePort in namespace services-2482 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2482 STEP: creating replication controller externalsvc in namespace services-2482 I0408 21:36:58.226769 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2482, replica count: 2 I0408 21:37:01.277295 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 21:37:04.277512 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 8 21:37:04.329: INFO: Creating new exec pod Apr 8 21:37:08.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2482 execpodzjvbf -- /bin/sh -x -c nslookup nodeport-service' Apr 8 21:37:08.601: INFO: stderr: "I0408 21:37:08.500957 2027 log.go:172] (0xc000658630) (0xc0006ebe00) Create stream\nI0408 21:37:08.501030 2027 log.go:172] (0xc000658630) (0xc0006ebe00) Stream added, broadcasting: 1\nI0408 21:37:08.505435 2027 log.go:172] (0xc000658630) Reply frame received for 1\nI0408 21:37:08.505504 2027 log.go:172] (0xc000658630) (0xc0006ebea0) Create stream\nI0408 21:37:08.505528 2027 log.go:172] (0xc000658630) (0xc0006ebea0) Stream added, broadcasting: 3\nI0408 21:37:08.509413 2027 log.go:172] (0xc000658630) Reply frame received for 3\nI0408 21:37:08.509531 2027 log.go:172] (0xc000658630) (0xc00067f0e0) Create stream\nI0408 21:37:08.509557 2027 log.go:172] (0xc000658630) (0xc00067f0e0) Stream added, broadcasting: 5\nI0408 21:37:08.512620 2027 log.go:172] (0xc000658630) Reply frame received for 5\nI0408 21:37:08.585822 2027 log.go:172] (0xc000658630) Data 
frame received for 5\nI0408 21:37:08.585874 2027 log.go:172] (0xc00067f0e0) (5) Data frame handling\nI0408 21:37:08.585911 2027 log.go:172] (0xc00067f0e0) (5) Data frame sent\n+ nslookup nodeport-service\nI0408 21:37:08.592527 2027 log.go:172] (0xc000658630) Data frame received for 3\nI0408 21:37:08.592565 2027 log.go:172] (0xc0006ebea0) (3) Data frame handling\nI0408 21:37:08.592604 2027 log.go:172] (0xc0006ebea0) (3) Data frame sent\nI0408 21:37:08.593863 2027 log.go:172] (0xc000658630) Data frame received for 3\nI0408 21:37:08.593895 2027 log.go:172] (0xc0006ebea0) (3) Data frame handling\nI0408 21:37:08.593917 2027 log.go:172] (0xc0006ebea0) (3) Data frame sent\nI0408 21:37:08.594367 2027 log.go:172] (0xc000658630) Data frame received for 5\nI0408 21:37:08.594398 2027 log.go:172] (0xc00067f0e0) (5) Data frame handling\nI0408 21:37:08.594430 2027 log.go:172] (0xc000658630) Data frame received for 3\nI0408 21:37:08.594450 2027 log.go:172] (0xc0006ebea0) (3) Data frame handling\nI0408 21:37:08.596280 2027 log.go:172] (0xc000658630) Data frame received for 1\nI0408 21:37:08.596323 2027 log.go:172] (0xc0006ebe00) (1) Data frame handling\nI0408 21:37:08.596354 2027 log.go:172] (0xc0006ebe00) (1) Data frame sent\nI0408 21:37:08.596376 2027 log.go:172] (0xc000658630) (0xc0006ebe00) Stream removed, broadcasting: 1\nI0408 21:37:08.596401 2027 log.go:172] (0xc000658630) Go away received\nI0408 21:37:08.596890 2027 log.go:172] (0xc000658630) (0xc0006ebe00) Stream removed, broadcasting: 1\nI0408 21:37:08.596911 2027 log.go:172] (0xc000658630) (0xc0006ebea0) Stream removed, broadcasting: 3\nI0408 21:37:08.596923 2027 log.go:172] (0xc000658630) (0xc00067f0e0) Stream removed, broadcasting: 5\n" Apr 8 21:37:08.601: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2482.svc.cluster.local\tcanonical name = externalsvc.services-2482.svc.cluster.local.\nName:\texternalsvc.services-2482.svc.cluster.local\nAddress: 10.104.151.129\n\n" STEP: 
deleting ReplicationController externalsvc in namespace services-2482, will wait for the garbage collector to delete the pods Apr 8 21:37:08.662: INFO: Deleting ReplicationController externalsvc took: 6.525134ms Apr 8 21:37:08.762: INFO: Terminating ReplicationController externalsvc pods took: 100.237761ms Apr 8 21:37:19.608: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:37:19.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2482" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:21.647 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":137,"skipped":2135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:37:19.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected 
downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 8 21:37:24.258: INFO: Successfully updated pod "annotationupdate93dfd9af-34c4-4b47-b29b-afd6b182a03f" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:37:26.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5334" for this suite. • [SLOW TEST:6.637 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2160,"failed":0} SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:37:26.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2299 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 8 21:37:26.352: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 8 21:37:50.458: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.161 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2299 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:37:50.458: INFO: >>> kubeConfig: /root/.kube/config I0408 21:37:50.486247 6 log.go:172] (0xc001690370) (0xc000f472c0) Create stream I0408 21:37:50.486272 6 log.go:172] (0xc001690370) (0xc000f472c0) Stream added, broadcasting: 1 I0408 21:37:50.487900 6 log.go:172] (0xc001690370) Reply frame received for 1 I0408 21:37:50.487935 6 log.go:172] (0xc001690370) (0xc000d02a00) Create stream I0408 21:37:50.487948 6 log.go:172] (0xc001690370) (0xc000d02a00) Stream added, broadcasting: 3 I0408 21:37:50.488810 6 log.go:172] (0xc001690370) Reply frame received for 3 I0408 21:37:50.488836 6 log.go:172] (0xc001690370) (0xc000f477c0) Create stream I0408 21:37:50.488844 6 log.go:172] (0xc001690370) (0xc000f477c0) Stream added, broadcasting: 5 I0408 21:37:50.489785 6 log.go:172] (0xc001690370) Reply frame received for 5 I0408 21:37:51.581778 6 log.go:172] (0xc001690370) Data frame received for 3 I0408 21:37:51.581813 6 log.go:172] (0xc000d02a00) (3) Data frame handling I0408 21:37:51.581839 6 log.go:172] (0xc000d02a00) (3) Data frame sent I0408 21:37:51.581864 6 log.go:172] (0xc001690370) Data frame received for 3 I0408 21:37:51.581876 6 log.go:172] (0xc000d02a00) (3) Data frame handling I0408 21:37:51.582202 6 log.go:172] (0xc001690370) Data frame received for 5 I0408 21:37:51.582235 6 log.go:172] (0xc000f477c0) (5) Data frame 
handling I0408 21:37:51.584223 6 log.go:172] (0xc001690370) Data frame received for 1 I0408 21:37:51.584247 6 log.go:172] (0xc000f472c0) (1) Data frame handling I0408 21:37:51.584260 6 log.go:172] (0xc000f472c0) (1) Data frame sent I0408 21:37:51.584380 6 log.go:172] (0xc001690370) (0xc000f472c0) Stream removed, broadcasting: 1 I0408 21:37:51.584482 6 log.go:172] (0xc001690370) (0xc000f472c0) Stream removed, broadcasting: 1 I0408 21:37:51.584510 6 log.go:172] (0xc001690370) (0xc000d02a00) Stream removed, broadcasting: 3 I0408 21:37:51.584609 6 log.go:172] (0xc001690370) Go away received I0408 21:37:51.584753 6 log.go:172] (0xc001690370) (0xc000f477c0) Stream removed, broadcasting: 5 Apr 8 21:37:51.584: INFO: Found all expected endpoints: [netserver-0] Apr 8 21:37:51.588: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.222 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2299 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:37:51.588: INFO: >>> kubeConfig: /root/.kube/config I0408 21:37:51.623546 6 log.go:172] (0xc001690c60) (0xc000cce1e0) Create stream I0408 21:37:51.623586 6 log.go:172] (0xc001690c60) (0xc000cce1e0) Stream added, broadcasting: 1 I0408 21:37:51.625982 6 log.go:172] (0xc001690c60) Reply frame received for 1 I0408 21:37:51.626039 6 log.go:172] (0xc001690c60) (0xc000cce3c0) Create stream I0408 21:37:51.626064 6 log.go:172] (0xc001690c60) (0xc000cce3c0) Stream added, broadcasting: 3 I0408 21:37:51.627120 6 log.go:172] (0xc001690c60) Reply frame received for 3 I0408 21:37:51.627157 6 log.go:172] (0xc001690c60) (0xc00135c140) Create stream I0408 21:37:51.627170 6 log.go:172] (0xc001690c60) (0xc00135c140) Stream added, broadcasting: 5 I0408 21:37:51.627997 6 log.go:172] (0xc001690c60) Reply frame received for 5 I0408 21:37:52.734504 6 log.go:172] (0xc001690c60) Data frame received for 5 I0408 21:37:52.734561 6 log.go:172] (0xc00135c140) 
(5) Data frame handling I0408 21:37:52.734607 6 log.go:172] (0xc001690c60) Data frame received for 3 I0408 21:37:52.734628 6 log.go:172] (0xc000cce3c0) (3) Data frame handling I0408 21:37:52.734653 6 log.go:172] (0xc000cce3c0) (3) Data frame sent I0408 21:37:52.734674 6 log.go:172] (0xc001690c60) Data frame received for 3 I0408 21:37:52.734701 6 log.go:172] (0xc000cce3c0) (3) Data frame handling I0408 21:37:52.736403 6 log.go:172] (0xc001690c60) Data frame received for 1 I0408 21:37:52.736436 6 log.go:172] (0xc000cce1e0) (1) Data frame handling I0408 21:37:52.736458 6 log.go:172] (0xc000cce1e0) (1) Data frame sent I0408 21:37:52.736468 6 log.go:172] (0xc001690c60) (0xc000cce1e0) Stream removed, broadcasting: 1 I0408 21:37:52.736549 6 log.go:172] (0xc001690c60) (0xc000cce1e0) Stream removed, broadcasting: 1 I0408 21:37:52.736559 6 log.go:172] (0xc001690c60) (0xc000cce3c0) Stream removed, broadcasting: 3 I0408 21:37:52.736615 6 log.go:172] (0xc001690c60) Go away received I0408 21:37:52.736659 6 log.go:172] (0xc001690c60) (0xc00135c140) Stream removed, broadcasting: 5 Apr 8 21:37:52.736: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:37:52.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2299" for this suite. 
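The node-pod UDP check above runs `echo hostName | nc -w 1 -u <podIP> 8081` from a host-network pod against each netserver pod. A rough sketch of such a checker pod (image, name, and the `TARGET_IP` placeholder are assumptions, not the exact objects the framework creates):

```yaml
# Sketch of a one-shot UDP probe pod, mimicking the nc command in the log.
# Replace TARGET_IP with a netserver pod IP (e.g. 10.244.1.161 in this run).
apiVersion: v1
kind: Pod
metadata:
  name: udp-check
spec:
  restartPolicy: Never
  hostNetwork: true   # node-to-pod check: probe from the node's own network
  containers:
  - name: check
    image: busybox:1.36
    command: ["sh", "-c", "echo hostName | nc -w 1 -u TARGET_IP 8081"]
```

The e2e framework does the equivalent via `ExecWithOptions` inside its own `host-test-container-pod` rather than creating a fresh pod per probe.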
• [SLOW TEST:26.435 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2167,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:37:52.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 8 21:37:52.870: INFO: Waiting up to 5m0s for pod "downward-api-97072dd0-7d57-49b3-b861-72cb478cbc1f" in namespace "downward-api-5628" to be "success or failure" Apr 8 21:37:52.879: INFO: Pod "downward-api-97072dd0-7d57-49b3-b861-72cb478cbc1f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.142827ms Apr 8 21:37:54.884: INFO: Pod "downward-api-97072dd0-7d57-49b3-b861-72cb478cbc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013845635s Apr 8 21:37:56.887: INFO: Pod "downward-api-97072dd0-7d57-49b3-b861-72cb478cbc1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017259752s STEP: Saw pod success Apr 8 21:37:56.887: INFO: Pod "downward-api-97072dd0-7d57-49b3-b861-72cb478cbc1f" satisfied condition "success or failure" Apr 8 21:37:56.889: INFO: Trying to get logs from node jerma-worker pod downward-api-97072dd0-7d57-49b3-b861-72cb478cbc1f container dapi-container: STEP: delete the pod Apr 8 21:37:56.909: INFO: Waiting for pod downward-api-97072dd0-7d57-49b3-b861-72cb478cbc1f to disappear Apr 8 21:37:56.914: INFO: Pod downward-api-97072dd0-7d57-49b3-b861-72cb478cbc1f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:37:56.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5628" for this suite. 
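The downward API pod in the test above exposes pod metadata through `env.valueFrom.fieldRef`. A minimal sketch of an equivalent pod (the pod/container names and image are illustrative; the e2e test uses its own generated names):

```yaml
# Sketch: expose pod name, namespace, and IP as env vars via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36
    command: ["sh", "-c", "env | grep -E 'POD_(NAME|NAMESPACE|IP)'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```

The test then reads the container's logs (as seen in the `Trying to get logs` line) and asserts the expected values appear.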
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2188,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:37:56.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 21:37:56.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fa802d0-2877-4d86-b785-5859c8e9b1bb" in namespace "projected-1922" to be "success or failure" Apr 8 21:37:56.980: INFO: Pod "downwardapi-volume-3fa802d0-2877-4d86-b785-5859c8e9b1bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.372649ms Apr 8 21:37:59.018: INFO: Pod "downwardapi-volume-3fa802d0-2877-4d86-b785-5859c8e9b1bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042040157s Apr 8 21:38:01.022: INFO: Pod "downwardapi-volume-3fa802d0-2877-4d86-b785-5859c8e9b1bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046168673s STEP: Saw pod success Apr 8 21:38:01.022: INFO: Pod "downwardapi-volume-3fa802d0-2877-4d86-b785-5859c8e9b1bb" satisfied condition "success or failure" Apr 8 21:38:01.025: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3fa802d0-2877-4d86-b785-5859c8e9b1bb container client-container: STEP: delete the pod Apr 8 21:38:01.056: INFO: Waiting for pod downwardapi-volume-3fa802d0-2877-4d86-b785-5859c8e9b1bb to disappear Apr 8 21:38:01.066: INFO: Pod downwardapi-volume-3fa802d0-2877-4d86-b785-5859c8e9b1bb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:38:01.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1922" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2196,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:38:01.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Apr 8 21:38:01.145: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6603" to be "success or failure" Apr 8 21:38:01.162: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 16.893756ms Apr 8 21:38:03.166: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020913412s Apr 8 21:38:05.170: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.025126083s Apr 8 21:38:07.174: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02904232s STEP: Saw pod success Apr 8 21:38:07.174: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 8 21:38:07.177: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 8 21:38:07.206: INFO: Waiting for pod pod-host-path-test to disappear Apr 8 21:38:07.227: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:38:07.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6603" for this suite. 
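The hostPath test above mounts a directory from the node and verifies its mode from inside the container. A rough equivalent of `pod-host-path-test` (paths, names, and image are assumptions for illustration):

```yaml
# Sketch of a hostPath volume pod; the e2e test inspects the mount's file mode.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.36
    command: ["sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate   # create the node directory if it is absent
```

Note that hostPath volumes tie the pod to the state of a specific node's filesystem, which is why the conformance test is marked `[LinuxOnly]`.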
• [SLOW TEST:6.161 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2214,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:38:07.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5164 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5164 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5164 Apr 8 21:38:07.384: INFO: Found 0 stateful pods, waiting for 1 Apr 8 21:38:17.388: INFO: Waiting 
for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 8 21:38:17.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:38:17.635: INFO: stderr: "I0408 21:38:17.530019 2050 log.go:172] (0xc000918790) (0xc0002a23c0) Create stream\nI0408 21:38:17.530083 2050 log.go:172] (0xc000918790) (0xc0002a23c0) Stream added, broadcasting: 1\nI0408 21:38:17.532485 2050 log.go:172] (0xc000918790) Reply frame received for 1\nI0408 21:38:17.532520 2050 log.go:172] (0xc000918790) (0xc0007a2000) Create stream\nI0408 21:38:17.532530 2050 log.go:172] (0xc000918790) (0xc0007a2000) Stream added, broadcasting: 3\nI0408 21:38:17.533353 2050 log.go:172] (0xc000918790) Reply frame received for 3\nI0408 21:38:17.533393 2050 log.go:172] (0xc000918790) (0xc0002a2460) Create stream\nI0408 21:38:17.533405 2050 log.go:172] (0xc000918790) (0xc0002a2460) Stream added, broadcasting: 5\nI0408 21:38:17.534373 2050 log.go:172] (0xc000918790) Reply frame received for 5\nI0408 21:38:17.599421 2050 log.go:172] (0xc000918790) Data frame received for 5\nI0408 21:38:17.599450 2050 log.go:172] (0xc0002a2460) (5) Data frame handling\nI0408 21:38:17.599472 2050 log.go:172] (0xc0002a2460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:38:17.627926 2050 log.go:172] (0xc000918790) Data frame received for 3\nI0408 21:38:17.627948 2050 log.go:172] (0xc0007a2000) (3) Data frame handling\nI0408 21:38:17.627960 2050 log.go:172] (0xc0007a2000) (3) Data frame sent\nI0408 21:38:17.628018 2050 log.go:172] (0xc000918790) Data frame received for 5\nI0408 21:38:17.628027 2050 log.go:172] (0xc0002a2460) (5) Data frame handling\nI0408 21:38:17.628476 2050 log.go:172] (0xc000918790) Data frame received for 3\nI0408 21:38:17.628491 2050 log.go:172] 
(0xc0007a2000) (3) Data frame handling\nI0408 21:38:17.630637 2050 log.go:172] (0xc000918790) Data frame received for 1\nI0408 21:38:17.630650 2050 log.go:172] (0xc0002a23c0) (1) Data frame handling\nI0408 21:38:17.630659 2050 log.go:172] (0xc0002a23c0) (1) Data frame sent\nI0408 21:38:17.630671 2050 log.go:172] (0xc000918790) (0xc0002a23c0) Stream removed, broadcasting: 1\nI0408 21:38:17.630778 2050 log.go:172] (0xc000918790) Go away received\nI0408 21:38:17.630893 2050 log.go:172] (0xc000918790) (0xc0002a23c0) Stream removed, broadcasting: 1\nI0408 21:38:17.630904 2050 log.go:172] (0xc000918790) (0xc0007a2000) Stream removed, broadcasting: 3\nI0408 21:38:17.630909 2050 log.go:172] (0xc000918790) (0xc0002a2460) Stream removed, broadcasting: 5\n" Apr 8 21:38:17.635: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:38:17.635: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:38:17.639: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 8 21:38:27.644: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 21:38:27.644: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:38:27.664: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:38:27.664: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:38:27.664: INFO: Apr 8 21:38:27.664: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 8 21:38:28.669: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 8.992968948s Apr 8 21:38:29.725: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988516157s Apr 8 21:38:30.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932043907s Apr 8 21:38:31.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.927256931s Apr 8 21:38:32.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.922166645s Apr 8 21:38:33.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.916976917s Apr 8 21:38:34.750: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.912104885s Apr 8 21:38:35.755: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.906941682s Apr 8 21:38:36.760: INFO: Verifying statefulset ss doesn't scale past 3 for another 901.875822ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5164 Apr 8 21:38:37.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:38:38.009: INFO: stderr: "I0408 21:38:37.920898 2070 log.go:172] (0xc000590dc0) (0xc00071eaa0) Create stream\nI0408 21:38:37.920964 2070 log.go:172] (0xc000590dc0) (0xc00071eaa0) Stream added, broadcasting: 1\nI0408 21:38:37.923830 2070 log.go:172] (0xc000590dc0) Reply frame received for 1\nI0408 21:38:37.923871 2070 log.go:172] (0xc000590dc0) (0xc0006a99a0) Create stream\nI0408 21:38:37.923883 2070 log.go:172] (0xc000590dc0) (0xc0006a99a0) Stream added, broadcasting: 3\nI0408 21:38:37.924847 2070 log.go:172] (0xc000590dc0) Reply frame received for 3\nI0408 21:38:37.924908 2070 log.go:172] (0xc000590dc0) (0xc000a9a000) Create stream\nI0408 21:38:37.924935 2070 log.go:172] (0xc000590dc0) (0xc000a9a000) Stream added, broadcasting: 5\nI0408 21:38:37.926276 2070 log.go:172] (0xc000590dc0) Reply frame received 
for 5\nI0408 21:38:38.001945 2070 log.go:172] (0xc000590dc0) Data frame received for 3\nI0408 21:38:38.002052 2070 log.go:172] (0xc0006a99a0) (3) Data frame handling\nI0408 21:38:38.002100 2070 log.go:172] (0xc0006a99a0) (3) Data frame sent\nI0408 21:38:38.002117 2070 log.go:172] (0xc000590dc0) Data frame received for 3\nI0408 21:38:38.002127 2070 log.go:172] (0xc0006a99a0) (3) Data frame handling\nI0408 21:38:38.002159 2070 log.go:172] (0xc000590dc0) Data frame received for 5\nI0408 21:38:38.002193 2070 log.go:172] (0xc000a9a000) (5) Data frame handling\nI0408 21:38:38.002217 2070 log.go:172] (0xc000a9a000) (5) Data frame sent\nI0408 21:38:38.002231 2070 log.go:172] (0xc000590dc0) Data frame received for 5\nI0408 21:38:38.002247 2070 log.go:172] (0xc000a9a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 21:38:38.003605 2070 log.go:172] (0xc000590dc0) Data frame received for 1\nI0408 21:38:38.003632 2070 log.go:172] (0xc00071eaa0) (1) Data frame handling\nI0408 21:38:38.003661 2070 log.go:172] (0xc00071eaa0) (1) Data frame sent\nI0408 21:38:38.003780 2070 log.go:172] (0xc000590dc0) (0xc00071eaa0) Stream removed, broadcasting: 1\nI0408 21:38:38.004093 2070 log.go:172] (0xc000590dc0) Go away received\nI0408 21:38:38.004211 2070 log.go:172] (0xc000590dc0) (0xc00071eaa0) Stream removed, broadcasting: 1\nI0408 21:38:38.004244 2070 log.go:172] (0xc000590dc0) (0xc0006a99a0) Stream removed, broadcasting: 3\nI0408 21:38:38.004260 2070 log.go:172] (0xc000590dc0) (0xc000a9a000) Stream removed, broadcasting: 5\n" Apr 8 21:38:38.009: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:38:38.009: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 21:38:38.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Apr 8 21:38:38.225: INFO: stderr: "I0408 21:38:38.159056 2091 log.go:172] (0xc000ae80b0) (0xc0008219a0) Create stream\nI0408 21:38:38.159131 2091 log.go:172] (0xc000ae80b0) (0xc0008219a0) Stream added, broadcasting: 1\nI0408 21:38:38.161608 2091 log.go:172] (0xc000ae80b0) Reply frame received for 1\nI0408 21:38:38.161669 2091 log.go:172] (0xc000ae80b0) (0xc000821b80) Create stream\nI0408 21:38:38.161687 2091 log.go:172] (0xc000ae80b0) (0xc000821b80) Stream added, broadcasting: 3\nI0408 21:38:38.162631 2091 log.go:172] (0xc000ae80b0) Reply frame received for 3\nI0408 21:38:38.162667 2091 log.go:172] (0xc000ae80b0) (0xc000821c20) Create stream\nI0408 21:38:38.162678 2091 log.go:172] (0xc000ae80b0) (0xc000821c20) Stream added, broadcasting: 5\nI0408 21:38:38.163568 2091 log.go:172] (0xc000ae80b0) Reply frame received for 5\nI0408 21:38:38.218062 2091 log.go:172] (0xc000ae80b0) Data frame received for 3\nI0408 21:38:38.218126 2091 log.go:172] (0xc000ae80b0) Data frame received for 5\nI0408 21:38:38.218182 2091 log.go:172] (0xc000821c20) (5) Data frame handling\nI0408 21:38:38.218206 2091 log.go:172] (0xc000821c20) (5) Data frame sent\nI0408 21:38:38.218219 2091 log.go:172] (0xc000ae80b0) Data frame received for 5\nI0408 21:38:38.218232 2091 log.go:172] (0xc000821c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0408 21:38:38.218264 2091 log.go:172] (0xc000821b80) (3) Data frame handling\nI0408 21:38:38.218279 2091 log.go:172] (0xc000821b80) (3) Data frame sent\nI0408 21:38:38.218338 2091 log.go:172] (0xc000ae80b0) Data frame received for 3\nI0408 21:38:38.218359 2091 log.go:172] (0xc000821b80) (3) Data frame handling\nI0408 21:38:38.219984 2091 log.go:172] (0xc000ae80b0) Data frame received for 1\nI0408 21:38:38.220016 2091 log.go:172] (0xc0008219a0) (1) Data frame handling\nI0408 21:38:38.220034 2091 log.go:172] (0xc0008219a0) (1) 
Data frame sent\nI0408 21:38:38.220048 2091 log.go:172] (0xc000ae80b0) (0xc0008219a0) Stream removed, broadcasting: 1\nI0408 21:38:38.220086 2091 log.go:172] (0xc000ae80b0) Go away received\nI0408 21:38:38.220452 2091 log.go:172] (0xc000ae80b0) (0xc0008219a0) Stream removed, broadcasting: 1\nI0408 21:38:38.220472 2091 log.go:172] (0xc000ae80b0) (0xc000821b80) Stream removed, broadcasting: 3\nI0408 21:38:38.220484 2091 log.go:172] (0xc000ae80b0) (0xc000821c20) Stream removed, broadcasting: 5\n" Apr 8 21:38:38.225: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:38:38.226: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 21:38:38.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:38:38.442: INFO: stderr: "I0408 21:38:38.357401 2112 log.go:172] (0xc0009daa50) (0xc0009b6640) Create stream\nI0408 21:38:38.357471 2112 log.go:172] (0xc0009daa50) (0xc0009b6640) Stream added, broadcasting: 1\nI0408 21:38:38.362085 2112 log.go:172] (0xc0009daa50) Reply frame received for 1\nI0408 21:38:38.362122 2112 log.go:172] (0xc0009daa50) (0xc000641c20) Create stream\nI0408 21:38:38.362132 2112 log.go:172] (0xc0009daa50) (0xc000641c20) Stream added, broadcasting: 3\nI0408 21:38:38.363165 2112 log.go:172] (0xc0009daa50) Reply frame received for 3\nI0408 21:38:38.363198 2112 log.go:172] (0xc0009daa50) (0xc0005d6820) Create stream\nI0408 21:38:38.363216 2112 log.go:172] (0xc0009daa50) (0xc0005d6820) Stream added, broadcasting: 5\nI0408 21:38:38.364097 2112 log.go:172] (0xc0009daa50) Reply frame received for 5\nI0408 21:38:38.435113 2112 log.go:172] (0xc0009daa50) Data frame received for 5\nI0408 21:38:38.435160 2112 log.go:172] (0xc0005d6820) (5) Data frame handling\nI0408 21:38:38.435183 2112 log.go:172] 
(0xc0005d6820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0408 21:38:38.435200 2112 log.go:172] (0xc0009daa50) Data frame received for 5\nI0408 21:38:38.435228 2112 log.go:172] (0xc0005d6820) (5) Data frame handling\nI0408 21:38:38.435248 2112 log.go:172] (0xc0009daa50) Data frame received for 3\nI0408 21:38:38.435268 2112 log.go:172] (0xc000641c20) (3) Data frame handling\nI0408 21:38:38.435284 2112 log.go:172] (0xc000641c20) (3) Data frame sent\nI0408 21:38:38.435293 2112 log.go:172] (0xc0009daa50) Data frame received for 3\nI0408 21:38:38.435299 2112 log.go:172] (0xc000641c20) (3) Data frame handling\nI0408 21:38:38.436745 2112 log.go:172] (0xc0009daa50) Data frame received for 1\nI0408 21:38:38.436773 2112 log.go:172] (0xc0009b6640) (1) Data frame handling\nI0408 21:38:38.436780 2112 log.go:172] (0xc0009b6640) (1) Data frame sent\nI0408 21:38:38.436790 2112 log.go:172] (0xc0009daa50) (0xc0009b6640) Stream removed, broadcasting: 1\nI0408 21:38:38.436806 2112 log.go:172] (0xc0009daa50) Go away received\nI0408 21:38:38.437386 2112 log.go:172] (0xc0009daa50) (0xc0009b6640) Stream removed, broadcasting: 1\nI0408 21:38:38.437412 2112 log.go:172] (0xc0009daa50) (0xc000641c20) Stream removed, broadcasting: 3\nI0408 21:38:38.437425 2112 log.go:172] (0xc0009daa50) (0xc0005d6820) Stream removed, broadcasting: 5\n" Apr 8 21:38:38.442: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:38:38.442: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 21:38:38.446: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 8 21:38:48.451: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:38:48.451: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently 
Running - Ready=true Apr 8 21:38:48.451: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 8 21:38:48.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:38:48.681: INFO: stderr: "I0408 21:38:48.585335 2132 log.go:172] (0xc000996000) (0xc0009d4000) Create stream\nI0408 21:38:48.585402 2132 log.go:172] (0xc000996000) (0xc0009d4000) Stream added, broadcasting: 1\nI0408 21:38:48.588394 2132 log.go:172] (0xc000996000) Reply frame received for 1\nI0408 21:38:48.588426 2132 log.go:172] (0xc000996000) (0xc000956320) Create stream\nI0408 21:38:48.588434 2132 log.go:172] (0xc000996000) (0xc000956320) Stream added, broadcasting: 3\nI0408 21:38:48.589605 2132 log.go:172] (0xc000996000) Reply frame received for 3\nI0408 21:38:48.589630 2132 log.go:172] (0xc000996000) (0xc00073c500) Create stream\nI0408 21:38:48.589639 2132 log.go:172] (0xc000996000) (0xc00073c500) Stream added, broadcasting: 5\nI0408 21:38:48.590415 2132 log.go:172] (0xc000996000) Reply frame received for 5\nI0408 21:38:48.674893 2132 log.go:172] (0xc000996000) Data frame received for 3\nI0408 21:38:48.674939 2132 log.go:172] (0xc000956320) (3) Data frame handling\nI0408 21:38:48.674958 2132 log.go:172] (0xc000956320) (3) Data frame sent\nI0408 21:38:48.674969 2132 log.go:172] (0xc000996000) Data frame received for 3\nI0408 21:38:48.674980 2132 log.go:172] (0xc000956320) (3) Data frame handling\nI0408 21:38:48.675105 2132 log.go:172] (0xc000996000) Data frame received for 5\nI0408 21:38:48.675124 2132 log.go:172] (0xc00073c500) (5) Data frame handling\nI0408 21:38:48.675137 2132 log.go:172] (0xc00073c500) (5) Data frame sent\nI0408 21:38:48.675150 2132 log.go:172] (0xc000996000) Data frame received for 5\nI0408 21:38:48.675161 2132 log.go:172] (0xc00073c500) (5) Data 
frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:38:48.676735 2132 log.go:172] (0xc000996000) Data frame received for 1\nI0408 21:38:48.676760 2132 log.go:172] (0xc0009d4000) (1) Data frame handling\nI0408 21:38:48.676810 2132 log.go:172] (0xc0009d4000) (1) Data frame sent\nI0408 21:38:48.676842 2132 log.go:172] (0xc000996000) (0xc0009d4000) Stream removed, broadcasting: 1\nI0408 21:38:48.676883 2132 log.go:172] (0xc000996000) Go away received\nI0408 21:38:48.677558 2132 log.go:172] (0xc000996000) (0xc0009d4000) Stream removed, broadcasting: 1\nI0408 21:38:48.677588 2132 log.go:172] (0xc000996000) (0xc000956320) Stream removed, broadcasting: 3\nI0408 21:38:48.677601 2132 log.go:172] (0xc000996000) (0xc00073c500) Stream removed, broadcasting: 5\n" Apr 8 21:38:48.682: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:38:48.682: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:38:48.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:38:48.934: INFO: stderr: "I0408 21:38:48.816563 2155 log.go:172] (0xc000a346e0) (0xc000ab4140) Create stream\nI0408 21:38:48.816616 2155 log.go:172] (0xc000a346e0) (0xc000ab4140) Stream added, broadcasting: 1\nI0408 21:38:48.818869 2155 log.go:172] (0xc000a346e0) Reply frame received for 1\nI0408 21:38:48.818902 2155 log.go:172] (0xc000a346e0) (0xc000ac8000) Create stream\nI0408 21:38:48.818911 2155 log.go:172] (0xc000a346e0) (0xc000ac8000) Stream added, broadcasting: 3\nI0408 21:38:48.819999 2155 log.go:172] (0xc000a346e0) Reply frame received for 3\nI0408 21:38:48.820020 2155 log.go:172] (0xc000a346e0) (0xc0002834a0) Create stream\nI0408 21:38:48.820028 2155 log.go:172] (0xc000a346e0) (0xc0002834a0) Stream added, broadcasting: 
5\nI0408 21:38:48.821346 2155 log.go:172] (0xc000a346e0) Reply frame received for 5\nI0408 21:38:48.899411 2155 log.go:172] (0xc000a346e0) Data frame received for 5\nI0408 21:38:48.899447 2155 log.go:172] (0xc0002834a0) (5) Data frame handling\nI0408 21:38:48.899465 2155 log.go:172] (0xc0002834a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:38:48.926617 2155 log.go:172] (0xc000a346e0) Data frame received for 5\nI0408 21:38:48.926674 2155 log.go:172] (0xc0002834a0) (5) Data frame handling\nI0408 21:38:48.926711 2155 log.go:172] (0xc000a346e0) Data frame received for 3\nI0408 21:38:48.926736 2155 log.go:172] (0xc000ac8000) (3) Data frame handling\nI0408 21:38:48.926767 2155 log.go:172] (0xc000ac8000) (3) Data frame sent\nI0408 21:38:48.926787 2155 log.go:172] (0xc000a346e0) Data frame received for 3\nI0408 21:38:48.926806 2155 log.go:172] (0xc000ac8000) (3) Data frame handling\nI0408 21:38:48.928323 2155 log.go:172] (0xc000a346e0) Data frame received for 1\nI0408 21:38:48.928375 2155 log.go:172] (0xc000ab4140) (1) Data frame handling\nI0408 21:38:48.928411 2155 log.go:172] (0xc000ab4140) (1) Data frame sent\nI0408 21:38:48.928443 2155 log.go:172] (0xc000a346e0) (0xc000ab4140) Stream removed, broadcasting: 1\nI0408 21:38:48.928485 2155 log.go:172] (0xc000a346e0) Go away received\nI0408 21:38:48.929061 2155 log.go:172] (0xc000a346e0) (0xc000ab4140) Stream removed, broadcasting: 1\nI0408 21:38:48.929090 2155 log.go:172] (0xc000a346e0) (0xc000ac8000) Stream removed, broadcasting: 3\nI0408 21:38:48.929283 2155 log.go:172] (0xc000a346e0) (0xc0002834a0) Stream removed, broadcasting: 5\n" Apr 8 21:38:48.934: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:38:48.934: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:38:48.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5164 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:38:49.205: INFO: stderr: "I0408 21:38:49.091145 2175 log.go:172] (0xc0009086e0) (0xc000723540) Create stream\nI0408 21:38:49.091205 2175 log.go:172] (0xc0009086e0) (0xc000723540) Stream added, broadcasting: 1\nI0408 21:38:49.093903 2175 log.go:172] (0xc0009086e0) Reply frame received for 1\nI0408 21:38:49.094030 2175 log.go:172] (0xc0009086e0) (0xc000a660a0) Create stream\nI0408 21:38:49.094060 2175 log.go:172] (0xc0009086e0) (0xc000a660a0) Stream added, broadcasting: 3\nI0408 21:38:49.095356 2175 log.go:172] (0xc0009086e0) Reply frame received for 3\nI0408 21:38:49.095433 2175 log.go:172] (0xc0009086e0) (0xc000a66140) Create stream\nI0408 21:38:49.095458 2175 log.go:172] (0xc0009086e0) (0xc000a66140) Stream added, broadcasting: 5\nI0408 21:38:49.096703 2175 log.go:172] (0xc0009086e0) Reply frame received for 5\nI0408 21:38:49.168029 2175 log.go:172] (0xc0009086e0) Data frame received for 5\nI0408 21:38:49.168058 2175 log.go:172] (0xc000a66140) (5) Data frame handling\nI0408 21:38:49.168079 2175 log.go:172] (0xc000a66140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:38:49.196388 2175 log.go:172] (0xc0009086e0) Data frame received for 3\nI0408 21:38:49.196534 2175 log.go:172] (0xc000a660a0) (3) Data frame handling\nI0408 21:38:49.196587 2175 log.go:172] (0xc000a660a0) (3) Data frame sent\nI0408 21:38:49.197052 2175 log.go:172] (0xc0009086e0) Data frame received for 5\nI0408 21:38:49.197083 2175 log.go:172] (0xc000a66140) (5) Data frame handling\nI0408 21:38:49.197247 2175 log.go:172] (0xc0009086e0) Data frame received for 3\nI0408 21:38:49.197269 2175 log.go:172] (0xc000a660a0) (3) Data frame handling\nI0408 21:38:49.199258 2175 log.go:172] (0xc0009086e0) Data frame received for 1\nI0408 21:38:49.199374 2175 log.go:172] (0xc000723540) (1) Data frame handling\nI0408 21:38:49.199430 2175 log.go:172] (0xc000723540) 
(1) Data frame sent\nI0408 21:38:49.199453 2175 log.go:172] (0xc0009086e0) (0xc000723540) Stream removed, broadcasting: 1\nI0408 21:38:49.199476 2175 log.go:172] (0xc0009086e0) Go away received\nI0408 21:38:49.199884 2175 log.go:172] (0xc0009086e0) (0xc000723540) Stream removed, broadcasting: 1\nI0408 21:38:49.199904 2175 log.go:172] (0xc0009086e0) (0xc000a660a0) Stream removed, broadcasting: 3\nI0408 21:38:49.199915 2175 log.go:172] (0xc0009086e0) (0xc000a66140) Stream removed, broadcasting: 5\n" Apr 8 21:38:49.205: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:38:49.205: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:38:49.205: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:38:49.208: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 8 21:38:59.217: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 8 21:38:59.217: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 8 21:38:59.217: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 8 21:38:59.234: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:38:59.234: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:38:59.234: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:38:59.234: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:38:59.234: INFO: Apr 8 21:38:59.234: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:00.312: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:39:00.312: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:00.313: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 
8 21:39:00.313: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:00.313: INFO: Apr 8 21:39:00.313: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:01.318: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:39:01.318: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:01.318: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:01.318: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:01.318: INFO: Apr 8 21:39:01.318: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:02.323: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:39:02.323: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:02.323: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:02.323: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:02.323: INFO: Apr 8 21:39:02.323: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:03.328: INFO: POD NODE PHASE GRACE CONDITIONS 
Apr 8 21:39:03.328: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:03.328: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:03.328: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:03.328: INFO: Apr 8 21:39:03.328: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:04.333: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:39:04.333: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:04.333: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:04.333: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:04.333: INFO: Apr 8 21:39:04.333: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:05.338: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:39:05.338: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:05.338: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:05.338: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:05.338: INFO: Apr 8 21:39:05.338: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:06.343: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:39:06.343: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:06.343: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 
21:38:27 +0000 UTC }] Apr 8 21:39:06.343: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:06.343: INFO: Apr 8 21:39:06.343: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:07.349: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:39:07.349: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:07.349: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:07.349: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 
00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:07.349: INFO: Apr 8 21:39:07.349: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 8 21:39:08.354: INFO: POD NODE PHASE GRACE CONDITIONS Apr 8 21:39:08.354: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:07 +0000 UTC }] Apr 8 21:39:08.354: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:08.354: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:49 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-08 21:38:27 +0000 UTC }] Apr 8 21:39:08.354: INFO: Apr 8 21:39:08.354: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 
replicas and waiting until none of the pods are running in namespace statefulset-5164 Apr 8 21:39:09.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:39:09.506: INFO: rc: 1 Apr 8 21:39:09.506: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Apr 8 21:39:19.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:39:19.616: INFO: rc: 1 Apr 8 21:39:19.616: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Apr 8 21:39:29.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:39:29.711: INFO: rc: 1 Apr 8 21:39:29.711: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Apr 8 21:39:39.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 8 21:39:39.816: INFO: rc: 1
Apr 8 21:39:39.816: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1
(the same RunHostCmd attempt was retried every 10s from 21:39:49 through 21:44:02, each returning rc: 1 with the identical NotFound error)
Apr 8 21:44:12.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5164 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 8 21:44:12.550: INFO: rc: 1
Apr 8 21:44:12.551: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1:
Apr 8 21:44:12.551: INFO: Scaling statefulset ss to 0
Apr 8
21:44:12.559: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Apr 8 21:44:12.562: INFO: Deleting all statefulset in ns statefulset-5164
Apr 8 21:44:12.565: INFO: Scaling statefulset ss to 0
Apr 8 21:44:12.573: INFO: Waiting for statefulset status.replicas updated to 0
Apr 8 21:44:12.575: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:44:12.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5164" for this suite.
• [SLOW TEST:365.366 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":143,"skipped":2222,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:44:12.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 8 21:44:13.089: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 8 21:44:15.113: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721979053, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721979053, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721979053, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721979053, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 21:44:18.162: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:44:18.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-287" for this suite.
STEP: Destroying namespace "webhook-287-markers" for this suite.
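The discovery steps logged above walk /apis, then the group, then the group/version resource list. A minimal Python sketch of that lookup logic, using hypothetical sample payloads (shaped like the Kubernetes APIGroupList and APIResourceList discovery documents, not the live cluster responses):

```python
# Sketch of the discovery-document walk the test performs.
# APIS_DOC and GROUP_VERSION_DOC are hypothetical sample payloads.

APIS_DOC = {  # what GET /apis returns (APIGroupList shape)
    "groups": [
        {
            "name": "admissionregistration.k8s.io",
            "versions": [
                {"groupVersion": "admissionregistration.k8s.io/v1", "version": "v1"}
            ],
        }
    ]
}

GROUP_VERSION_DOC = {  # what GET /apis/admissionregistration.k8s.io/v1 returns
    "groupVersion": "admissionregistration.k8s.io/v1",
    "resources": [
        {"name": "mutatingwebhookconfigurations", "kind": "MutatingWebhookConfiguration"},
        {"name": "validatingwebhookconfigurations", "kind": "ValidatingWebhookConfiguration"},
    ],
}

def find_group(apis_doc, group_name):
    """Return the APIGroup entry named group_name, or None if absent."""
    return next((g for g in apis_doc["groups"] if g["name"] == group_name), None)

def has_group_version(group, gv):
    """True if the group advertises the given groupVersion string."""
    return any(v["groupVersion"] == gv for v in group["versions"])

def resource_names(gv_doc):
    """Set of resource names published by a group/version discovery document."""
    return {r["name"] for r in gv_doc["resources"]}

group = find_group(APIS_DOC, "admissionregistration.k8s.io")
assert group is not None and has_group_version(group, "admissionregistration.k8s.io/v1")
assert {"mutatingwebhookconfigurations",
        "validatingwebhookconfigurations"} <= resource_names(GROUP_VERSION_DOC)
```

The three assertions mirror the three "finding ..." STEP lines in the log: group present, group/version advertised, and both webhook configuration resources listed.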
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.717 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":144,"skipped":2247,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:44:18.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod liveness-b08bd7c1-8243-419b-9912-ab6c1e73fae3 in namespace container-probe-2552
Apr 8 21:44:22.440: INFO: Started pod liveness-b08bd7c1-8243-419b-9912-ab6c1e73fae3 in namespace container-probe-2552
STEP: checking the pod's current state and verifying that restartCount is present
Apr 8 21:44:22.443: INFO: Initial restart count of pod liveness-b08bd7c1-8243-419b-9912-ab6c1e73fae3 is 0
Apr 8 21:44:44.548: INFO: Restart count of pod container-probe-2552/liveness-b08bd7c1-8243-419b-9912-ab6c1e73fae3 is now 1 (22.104572524s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:44:44.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2552" for this suite.
• [SLOW TEST:26.299 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2257,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:44:44.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating the pod
Apr 8 21:44:49.222: INFO: Successfully updated pod "annotationupdate064fdfea-c0b0-471b-bfcd-2a371381b885"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:44:51.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1077" for this suite.
• [SLOW TEST:6.650 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2262,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:44:51.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod busybox-a4f511e0-4f23-43fa-94f9-be6d8984ae5c in namespace container-probe-7569
Apr 8 21:44:55.320: INFO: Started pod busybox-a4f511e0-4f23-43fa-94f9-be6d8984ae5c in namespace container-probe-7569
STEP: checking the pod's current state and verifying that restartCount is present
Apr 8 21:44:55.323: INFO: Initial restart count of pod busybox-a4f511e0-4f23-43fa-94f9-be6d8984ae5c is 0
Apr 8 21:45:45.514: INFO: Restart count of pod container-probe-7569/busybox-a4f511e0-4f23-43fa-94f9-be6d8984ae5c is now 1 (50.190489255s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:45:45.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7569" for this suite.
• [SLOW TEST:54.376 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":147,"skipped":2268,"failed":0}
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:45:45.644: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-8238840d-4bd4-4381-b481-a5ae7d50cf4d
STEP: Creating a pod to test consume configMaps
Apr 8 21:45:45.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-d711aa6f-5329-4267-898d-203a6143181f" in namespace "configmap-3457" to be "success or failure"
Apr 8 21:45:45.769: INFO: Pod "pod-configmaps-d711aa6f-5329-4267-898d-203a6143181f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.701348ms
Apr 8 21:45:47.779: INFO: Pod "pod-configmaps-d711aa6f-5329-4267-898d-203a6143181f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013199278s
Apr 8 21:45:49.796: INFO: Pod "pod-configmaps-d711aa6f-5329-4267-898d-203a6143181f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029641812s
STEP: Saw pod success
Apr 8 21:45:49.796: INFO: Pod "pod-configmaps-d711aa6f-5329-4267-898d-203a6143181f" satisfied condition "success or failure"
Apr 8 21:45:49.810: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d711aa6f-5329-4267-898d-203a6143181f container configmap-volume-test:
STEP: delete the pod
Apr 8 21:45:49.860: INFO: Waiting for pod pod-configmaps-d711aa6f-5329-4267-898d-203a6143181f to disappear
Apr 8 21:45:49.886: INFO: Pod pod-configmaps-d711aa6f-5329-4267-898d-203a6143181f no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:45:49.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3457" for this suite.
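The ConfigMap test above polls the pod every ~2s until its phase reaches "success or failure" (the Pending, Pending, Succeeded progression with elapsed times). A minimal Python sketch of that poll-with-timeout pattern; `wait_for_phase` and its stubbed phase source are illustrative, not the e2e framework's actual helpers:

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"), timeout=5.0, poll=0.01):
    """Poll get_phase() until it returns a phase in `want` or the timeout expires.

    get_phase is a stand-in for reading pod.status.phase from the API server;
    returns the terminal phase and the elapsed time, like the log's
    'Elapsed: ...' annotations.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in want:
            return phase, time.monotonic() - start
        time.sleep(poll)
    raise TimeoutError("pod never reached one of %r" % (want,))

# Simulate the Pending -> Pending -> Succeeded progression from the log.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, elapsed = wait_for_phase(lambda: next(phases))
assert phase == "Succeeded"
```

The real framework uses a 2s poll interval and a 5m0s timeout, as the "Waiting up to 5m0s for pod ..." line shows; the short values here just keep the sketch fast.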
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2275,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:45:49.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 8 21:45:53.990: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 21:45:54.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-623" for this suite.
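This FallbackToLogsOnError test and its empty-on-success counterpart run earlier in the suite exercise the same selection rule: a non-empty termination-message file wins; with FallbackToLogsOnError an empty file falls back to the tail of the container log, but only when the container failed. A simplified Python sketch of that rule (the `limit` value is illustrative, not kubelet's actual cap, and real kubelet logic has more cases):

```python
def termination_message(policy, exit_code, message_file_contents, logs, limit=4096):
    """Simplified sketch of how a termination message is chosen.

    - Whatever the policy, a non-empty message file is used as-is.
    - With FallbackToLogsOnError, an empty file falls back to the log tail
      only when the container exited non-zero: a succeeding container with
      an empty file reports an empty message (&{} in the earlier test),
      while a failing one reports its log output (DONE in the test above).
    """
    if message_file_contents:
        return message_file_contents[:limit]
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs[-limit:]
    return ""

assert termination_message("FallbackToLogsOnError", 0, "", "ignored") == ""
assert termination_message("FallbackToLogsOnError", 1, "", "DONE") == "DONE"
```

The two assertions correspond to the two conformance cases in this log: empty message on success, log-derived "DONE" on failure.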
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2295,"failed":0}
SSSSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 21:45:54.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1996 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1996;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1996 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1996;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1996.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1996.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1996.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1996.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1996.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1996.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1996.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1996.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1996.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1996.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1996.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 154.47.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.47.154_udp@PTR;check="$$(dig +tcp +noall +answer +search 154.47.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.47.154_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1996 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1996;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1996 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1996;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1996.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1996.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1996.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1996.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1996.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1996.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1996.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1996.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1996.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1996.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1996.svc;podARec=$$(hostname -i| awk -F.
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1996.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 154.47.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.47.154_udp@PTR;check="$$(dig +tcp +noall +answer +search 154.47.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.47.154_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 21:46:00.250: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.253: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.256: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.276: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.279: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods 
dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.282: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.288: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.292: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.310: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.312: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.315: INFO: Unable to read jessie_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.318: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.320: INFO: Unable to read jessie_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested 
resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.323: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.326: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.329: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:00.348: INFO: Lookups using dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1996 wheezy_tcp@dns-test-service.dns-1996 wheezy_udp@dns-test-service.dns-1996.svc wheezy_tcp@dns-test-service.dns-1996.svc wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1996 jessie_tcp@dns-test-service.dns-1996 jessie_udp@dns-test-service.dns-1996.svc jessie_tcp@dns-test-service.dns-1996.svc jessie_udp@_http._tcp.dns-test-service.dns-1996.svc jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc] Apr 8 21:46:05.353: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.357: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the 
requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.360: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.363: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.366: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.368: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.371: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.374: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.396: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.399: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could 
not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.402: INFO: Unable to read jessie_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.404: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.407: INFO: Unable to read jessie_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.409: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.411: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.414: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:05.429: INFO: Lookups using dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1996 wheezy_tcp@dns-test-service.dns-1996 wheezy_udp@dns-test-service.dns-1996.svc wheezy_tcp@dns-test-service.dns-1996.svc wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1996 jessie_tcp@dns-test-service.dns-1996 jessie_udp@dns-test-service.dns-1996.svc jessie_tcp@dns-test-service.dns-1996.svc jessie_udp@_http._tcp.dns-test-service.dns-1996.svc jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc] Apr 8 21:46:10.353: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.357: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.360: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.364: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.380: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.386: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc from pod 
dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.389: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.459: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.462: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.464: INFO: Unable to read jessie_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.467: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.470: INFO: Unable to read jessie_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.472: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.475: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.478: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:10.496: INFO: Lookups using dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1996 wheezy_tcp@dns-test-service.dns-1996 wheezy_udp@dns-test-service.dns-1996.svc wheezy_tcp@dns-test-service.dns-1996.svc wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1996 jessie_tcp@dns-test-service.dns-1996 jessie_udp@dns-test-service.dns-1996.svc jessie_tcp@dns-test-service.dns-1996.svc jessie_udp@_http._tcp.dns-test-service.dns-1996.svc jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc] Apr 8 21:46:15.355: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.358: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.361: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.363: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.370: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.372: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.375: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.399: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.402: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.405: INFO: Unable to read jessie_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.408: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.410: INFO: Unable to read jessie_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.416: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.419: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:15.435: INFO: Lookups using dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1996 wheezy_tcp@dns-test-service.dns-1996 wheezy_udp@dns-test-service.dns-1996.svc wheezy_tcp@dns-test-service.dns-1996.svc wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1996 jessie_tcp@dns-test-service.dns-1996 jessie_udp@dns-test-service.dns-1996.svc jessie_tcp@dns-test-service.dns-1996.svc jessie_udp@_http._tcp.dns-test-service.dns-1996.svc jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc] 
Apr 8 21:46:20.352: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.358: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.361: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.364: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.366: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.369: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.372: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.374: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods 
dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.406: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.408: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.410: INFO: Unable to read jessie_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.415: INFO: Unable to read jessie_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.418: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.420: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.423: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested 
resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:20.439: INFO: Lookups using dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1996 wheezy_tcp@dns-test-service.dns-1996 wheezy_udp@dns-test-service.dns-1996.svc wheezy_tcp@dns-test-service.dns-1996.svc wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1996 jessie_tcp@dns-test-service.dns-1996 jessie_udp@dns-test-service.dns-1996.svc jessie_tcp@dns-test-service.dns-1996.svc jessie_udp@_http._tcp.dns-test-service.dns-1996.svc jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc] Apr 8 21:46:25.353: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.356: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.359: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.362: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.364: INFO: Unable to read wheezy_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods 
dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.367: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.370: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.373: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.396: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.399: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.402: INFO: Unable to read jessie_udp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.405: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996 from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.408: INFO: Unable to read jessie_udp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested 
resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.411: INFO: Unable to read jessie_tcp@dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.414: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.417: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc from pod dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320: the server could not find the requested resource (get pods dns-test-ac04f22f-de86-4e06-9758-57c0cc832320) Apr 8 21:46:25.434: INFO: Lookups using dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1996 wheezy_tcp@dns-test-service.dns-1996 wheezy_udp@dns-test-service.dns-1996.svc wheezy_tcp@dns-test-service.dns-1996.svc wheezy_udp@_http._tcp.dns-test-service.dns-1996.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1996.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1996 jessie_tcp@dns-test-service.dns-1996 jessie_udp@dns-test-service.dns-1996.svc jessie_tcp@dns-test-service.dns-1996.svc jessie_udp@_http._tcp.dns-test-service.dns-1996.svc jessie_tcp@_http._tcp.dns-test-service.dns-1996.svc] Apr 8 21:46:30.437: INFO: DNS probes using dns-1996/dns-test-ac04f22f-de86-4e06-9758-57c0cc832320 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:46:31.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
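The probe scripts in this DNS test derive two names from the pod's IP: a pod A record (dots replaced by dashes, then namespace, `pod`, and the cluster domain appended) and a reversed `in-addr.arpa.` name for the PTR checks. A minimal sketch of both transformations, using an illustrative IP in place of the script's `hostname -i` (the namespace `dns-1996` and the PTR name `154.47.103.10.in-addr.arpa.` both appear in the log above):

```shell
# Illustrative pod IP; the real probe script takes it from `hostname -i`.
ip="10.103.47.154"

# Pod A record: dots become dashes, then "<ns>.pod.<cluster-domain>" is appended.
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1996.pod.cluster.local"}')
echo "$podARec"

# PTR query name: the four octets reversed under in-addr.arpa.
ptr=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr"
```

Running this prints `10-103-47-154.dns-1996.pod.cluster.local` and `154.47.103.10.in-addr.arpa.`, matching the names the probe writes its per-record result files for.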
STEP: Destroying namespace "dns-1996" for this suite. • [SLOW TEST:37.029 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":150,"skipped":2300,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:46:31.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 8 21:46:31.215: INFO: >>> kubeConfig: /root/.kube/config Apr 8 21:46:33.159: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:46:43.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-2180" for this suite. • [SLOW TEST:12.660 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":151,"skipped":2315,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:46:43.721: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Apr 8 21:46:43.771: INFO: Waiting up to 5m0s for pod "var-expansion-74fd8c3b-5b5f-4817-9310-502001a503a1" in namespace "var-expansion-7194" to be "success or failure" Apr 8 21:46:43.784: INFO: Pod "var-expansion-74fd8c3b-5b5f-4817-9310-502001a503a1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.800771ms Apr 8 21:46:45.906: INFO: Pod "var-expansion-74fd8c3b-5b5f-4817-9310-502001a503a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134754522s Apr 8 21:46:47.910: INFO: Pod "var-expansion-74fd8c3b-5b5f-4817-9310-502001a503a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138793299s STEP: Saw pod success Apr 8 21:46:47.910: INFO: Pod "var-expansion-74fd8c3b-5b5f-4817-9310-502001a503a1" satisfied condition "success or failure" Apr 8 21:46:47.913: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-74fd8c3b-5b5f-4817-9310-502001a503a1 container dapi-container: STEP: delete the pod Apr 8 21:46:47.947: INFO: Waiting for pod var-expansion-74fd8c3b-5b5f-4817-9310-502001a503a1 to disappear Apr 8 21:46:47.967: INFO: Pod var-expansion-74fd8c3b-5b5f-4817-9310-502001a503a1 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:46:47.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7194" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:46:47.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 8 21:46:48.635: INFO: Pod name wrapped-volume-race-d1d44f27-7db7-4996-aac0-978d6f8401a1: Found 0 pods out of 5 Apr 8 21:46:53.642: INFO: Pod name wrapped-volume-race-d1d44f27-7db7-4996-aac0-978d6f8401a1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d1d44f27-7db7-4996-aac0-978d6f8401a1 in namespace emptydir-wrapper-6916, will wait for the garbage collector to delete the pods Apr 8 21:47:05.831: INFO: Deleting ReplicationController wrapped-volume-race-d1d44f27-7db7-4996-aac0-978d6f8401a1 took: 29.09231ms Apr 8 21:47:06.231: INFO: Terminating ReplicationController wrapped-volume-race-d1d44f27-7db7-4996-aac0-978d6f8401a1 pods took: 400.202384ms STEP: Creating RC which spawns configmap-volume pods Apr 8 21:47:19.469: INFO: Pod name 
wrapped-volume-race-7675914e-9999-413b-b64f-8ed01b1d488d: Found 0 pods out of 5 Apr 8 21:47:24.491: INFO: Pod name wrapped-volume-race-7675914e-9999-413b-b64f-8ed01b1d488d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-7675914e-9999-413b-b64f-8ed01b1d488d in namespace emptydir-wrapper-6916, will wait for the garbage collector to delete the pods Apr 8 21:47:36.622: INFO: Deleting ReplicationController wrapped-volume-race-7675914e-9999-413b-b64f-8ed01b1d488d took: 14.993795ms Apr 8 21:47:36.922: INFO: Terminating ReplicationController wrapped-volume-race-7675914e-9999-413b-b64f-8ed01b1d488d pods took: 300.283992ms STEP: Creating RC which spawns configmap-volume pods Apr 8 21:47:49.952: INFO: Pod name wrapped-volume-race-82c0d719-0aa0-4772-a697-12c99d93d817: Found 0 pods out of 5 Apr 8 21:47:54.975: INFO: Pod name wrapped-volume-race-82c0d719-0aa0-4772-a697-12c99d93d817: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-82c0d719-0aa0-4772-a697-12c99d93d817 in namespace emptydir-wrapper-6916, will wait for the garbage collector to delete the pods Apr 8 21:48:09.085: INFO: Deleting ReplicationController wrapped-volume-race-82c0d719-0aa0-4772-a697-12c99d93d817 took: 8.864273ms Apr 8 21:48:11.185: INFO: Terminating ReplicationController wrapped-volume-race-82c0d719-0aa0-4772-a697-12c99d93d817 pods took: 2.100279338s STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:48:21.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6916" for this suite. 
• [SLOW TEST:93.153 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":153,"skipped":2432,"failed":0} SS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:48:21.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:48:21.258: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 19.532912ms)
Apr 8 21:48:21.260: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.53343ms)
Apr 8 21:48:21.263: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.501696ms)
Apr 8 21:48:21.265: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.400348ms)
Apr 8 21:48:21.267: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.165373ms)
Apr 8 21:48:21.270: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.331154ms)
Apr 8 21:48:21.272: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.524804ms)
Apr 8 21:48:21.275: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.544859ms)
Apr 8 21:48:21.277: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.618374ms)
Apr 8 21:48:21.280: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.725032ms)
Apr 8 21:48:21.309: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 28.569359ms)
Apr 8 21:48:21.312: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.486398ms)
Apr 8 21:48:21.316: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.612922ms)
Apr 8 21:48:21.320: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.774356ms)
Apr 8 21:48:21.323: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.456736ms)
Apr 8 21:48:21.326: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.851153ms)
Apr 8 21:48:21.329: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.280258ms)
Apr 8 21:48:21.333: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.493233ms)
Apr 8 21:48:21.336: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.435523ms)
Apr 8 21:48:21.340: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 3.321043ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:48:21.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8064" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":154,"skipped":2434,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:48:21.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9682 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 8 21:48:21.442: INFO: Found 0 stateful pods, waiting for 3 Apr 8 21:48:31.447: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:48:31.447: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true 
Apr 8 21:48:31.447: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 8 21:48:41.445: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:48:41.446: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:48:41.446: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 8 21:48:41.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9682 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:48:44.233: INFO: stderr: "I0408 21:48:44.098161 2819 log.go:172] (0xc000bb4c60) (0xc000a620a0) Create stream\nI0408 21:48:44.098198 2819 log.go:172] (0xc000bb4c60) (0xc000a620a0) Stream added, broadcasting: 1\nI0408 21:48:44.100979 2819 log.go:172] (0xc000bb4c60) Reply frame received for 1\nI0408 21:48:44.101031 2819 log.go:172] (0xc000bb4c60) (0xc000d180a0) Create stream\nI0408 21:48:44.101054 2819 log.go:172] (0xc000bb4c60) (0xc000d180a0) Stream added, broadcasting: 3\nI0408 21:48:44.102435 2819 log.go:172] (0xc000bb4c60) Reply frame received for 3\nI0408 21:48:44.102490 2819 log.go:172] (0xc000bb4c60) (0xc000d18140) Create stream\nI0408 21:48:44.102510 2819 log.go:172] (0xc000bb4c60) (0xc000d18140) Stream added, broadcasting: 5\nI0408 21:48:44.103758 2819 log.go:172] (0xc000bb4c60) Reply frame received for 5\nI0408 21:48:44.193477 2819 log.go:172] (0xc000bb4c60) Data frame received for 5\nI0408 21:48:44.193499 2819 log.go:172] (0xc000d18140) (5) Data frame handling\nI0408 21:48:44.193513 2819 log.go:172] (0xc000d18140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:48:44.223542 2819 log.go:172] (0xc000bb4c60) Data frame received for 3\nI0408 21:48:44.223583 2819 log.go:172] (0xc000d180a0) (3) Data frame handling\nI0408 21:48:44.223613 2819 log.go:172] (0xc000d180a0) (3) Data frame 
sent\nI0408 21:48:44.223635 2819 log.go:172] (0xc000bb4c60) Data frame received for 3\nI0408 21:48:44.223659 2819 log.go:172] (0xc000bb4c60) Data frame received for 5\nI0408 21:48:44.223703 2819 log.go:172] (0xc000d18140) (5) Data frame handling\nI0408 21:48:44.223798 2819 log.go:172] (0xc000d180a0) (3) Data frame handling\nI0408 21:48:44.225979 2819 log.go:172] (0xc000bb4c60) Data frame received for 1\nI0408 21:48:44.226016 2819 log.go:172] (0xc000a620a0) (1) Data frame handling\nI0408 21:48:44.226037 2819 log.go:172] (0xc000a620a0) (1) Data frame sent\nI0408 21:48:44.226059 2819 log.go:172] (0xc000bb4c60) (0xc000a620a0) Stream removed, broadcasting: 1\nI0408 21:48:44.226097 2819 log.go:172] (0xc000bb4c60) Go away received\nI0408 21:48:44.226600 2819 log.go:172] (0xc000bb4c60) (0xc000a620a0) Stream removed, broadcasting: 1\nI0408 21:48:44.226622 2819 log.go:172] (0xc000bb4c60) (0xc000d180a0) Stream removed, broadcasting: 3\nI0408 21:48:44.226635 2819 log.go:172] (0xc000bb4c60) (0xc000d18140) Stream removed, broadcasting: 5\n" Apr 8 21:48:44.233: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:48:44.233: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 8 21:48:54.266: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 8 21:49:04.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9682 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:49:04.565: INFO: stderr: "I0408 21:49:04.459871 2853 log.go:172] (0xc000ac18c0) (0xc000aac780) Create stream\nI0408 21:49:04.459917 2853 log.go:172] (0xc000ac18c0) (0xc000aac780) Stream added, broadcasting: 1\nI0408 
21:49:04.462491 2853 log.go:172] (0xc000ac18c0) Reply frame received for 1\nI0408 21:49:04.462549 2853 log.go:172] (0xc000ac18c0) (0xc0009fc8c0) Create stream\nI0408 21:49:04.462568 2853 log.go:172] (0xc000ac18c0) (0xc0009fc8c0) Stream added, broadcasting: 3\nI0408 21:49:04.463674 2853 log.go:172] (0xc000ac18c0) Reply frame received for 3\nI0408 21:49:04.463694 2853 log.go:172] (0xc000ac18c0) (0xc0009fc960) Create stream\nI0408 21:49:04.463700 2853 log.go:172] (0xc000ac18c0) (0xc0009fc960) Stream added, broadcasting: 5\nI0408 21:49:04.464596 2853 log.go:172] (0xc000ac18c0) Reply frame received for 5\nI0408 21:49:04.557527 2853 log.go:172] (0xc000ac18c0) Data frame received for 3\nI0408 21:49:04.557564 2853 log.go:172] (0xc0009fc8c0) (3) Data frame handling\nI0408 21:49:04.557579 2853 log.go:172] (0xc0009fc8c0) (3) Data frame sent\nI0408 21:49:04.557607 2853 log.go:172] (0xc000ac18c0) Data frame received for 5\nI0408 21:49:04.557656 2853 log.go:172] (0xc0009fc960) (5) Data frame handling\nI0408 21:49:04.557678 2853 log.go:172] (0xc0009fc960) (5) Data frame sent\nI0408 21:49:04.557695 2853 log.go:172] (0xc000ac18c0) Data frame received for 5\nI0408 21:49:04.557711 2853 log.go:172] (0xc0009fc960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 21:49:04.557776 2853 log.go:172] (0xc000ac18c0) Data frame received for 3\nI0408 21:49:04.557812 2853 log.go:172] (0xc0009fc8c0) (3) Data frame handling\nI0408 21:49:04.559347 2853 log.go:172] (0xc000ac18c0) Data frame received for 1\nI0408 21:49:04.559368 2853 log.go:172] (0xc000aac780) (1) Data frame handling\nI0408 21:49:04.559382 2853 log.go:172] (0xc000aac780) (1) Data frame sent\nI0408 21:49:04.559405 2853 log.go:172] (0xc000ac18c0) (0xc000aac780) Stream removed, broadcasting: 1\nI0408 21:49:04.559424 2853 log.go:172] (0xc000ac18c0) Go away received\nI0408 21:49:04.560020 2853 log.go:172] (0xc000ac18c0) (0xc000aac780) Stream removed, broadcasting: 1\nI0408 21:49:04.560047 2853 log.go:172] 
(0xc000ac18c0) (0xc0009fc8c0) Stream removed, broadcasting: 3\nI0408 21:49:04.560072 2853 log.go:172] (0xc000ac18c0) (0xc0009fc960) Stream removed, broadcasting: 5\n" Apr 8 21:49:04.565: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:49:04.565: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Apr 8 21:49:24.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9682 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 8 21:49:24.931: INFO: stderr: "I0408 21:49:24.778031 2874 log.go:172] (0xc000a7e160) (0xc0004974a0) Create stream\nI0408 21:49:24.778100 2874 log.go:172] (0xc000a7e160) (0xc0004974a0) Stream added, broadcasting: 1\nI0408 21:49:24.779896 2874 log.go:172] (0xc000a7e160) Reply frame received for 1\nI0408 21:49:24.779960 2874 log.go:172] (0xc000a7e160) (0xc00073ba40) Create stream\nI0408 21:49:24.779982 2874 log.go:172] (0xc000a7e160) (0xc00073ba40) Stream added, broadcasting: 3\nI0408 21:49:24.781002 2874 log.go:172] (0xc000a7e160) Reply frame received for 3\nI0408 21:49:24.781047 2874 log.go:172] (0xc000a7e160) (0xc0008dc000) Create stream\nI0408 21:49:24.781064 2874 log.go:172] (0xc000a7e160) (0xc0008dc000) Stream added, broadcasting: 5\nI0408 21:49:24.782047 2874 log.go:172] (0xc000a7e160) Reply frame received for 5\nI0408 21:49:24.863443 2874 log.go:172] (0xc000a7e160) Data frame received for 5\nI0408 21:49:24.863467 2874 log.go:172] (0xc0008dc000) (5) Data frame handling\nI0408 21:49:24.863483 2874 log.go:172] (0xc0008dc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0408 21:49:24.917562 2874 log.go:172] (0xc000a7e160) Data frame received for 3\nI0408 21:49:24.917596 2874 log.go:172] (0xc00073ba40) (3) Data frame handling\nI0408 21:49:24.917625 2874 log.go:172] 
(0xc00073ba40) (3) Data frame sent\nI0408 21:49:24.917634 2874 log.go:172] (0xc000a7e160) Data frame received for 3\nI0408 21:49:24.917644 2874 log.go:172] (0xc00073ba40) (3) Data frame handling\nI0408 21:49:24.917780 2874 log.go:172] (0xc000a7e160) Data frame received for 5\nI0408 21:49:24.917793 2874 log.go:172] (0xc0008dc000) (5) Data frame handling\nI0408 21:49:24.925853 2874 log.go:172] (0xc000a7e160) Data frame received for 1\nI0408 21:49:24.925878 2874 log.go:172] (0xc0004974a0) (1) Data frame handling\nI0408 21:49:24.925888 2874 log.go:172] (0xc0004974a0) (1) Data frame sent\nI0408 21:49:24.925898 2874 log.go:172] (0xc000a7e160) (0xc0004974a0) Stream removed, broadcasting: 1\nI0408 21:49:24.925908 2874 log.go:172] (0xc000a7e160) Go away received\nI0408 21:49:24.926359 2874 log.go:172] (0xc000a7e160) (0xc0004974a0) Stream removed, broadcasting: 1\nI0408 21:49:24.926386 2874 log.go:172] (0xc000a7e160) (0xc00073ba40) Stream removed, broadcasting: 3\nI0408 21:49:24.926398 2874 log.go:172] (0xc000a7e160) (0xc0008dc000) Stream removed, broadcasting: 5\n" Apr 8 21:49:24.931: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 8 21:49:24.931: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 8 21:49:34.960: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 8 21:49:44.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9682 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 8 21:49:45.204: INFO: stderr: "I0408 21:49:45.130329 2894 log.go:172] (0xc0000f53f0) (0xc00078e1e0) Create stream\nI0408 21:49:45.130379 2894 log.go:172] (0xc0000f53f0) (0xc00078e1e0) Stream added, broadcasting: 1\nI0408 21:49:45.132322 2894 log.go:172] (0xc0000f53f0) Reply frame received for 1\nI0408 21:49:45.132357 2894 log.go:172] (0xc0000f53f0) 
(0xc0005b5b80) Create stream\nI0408 21:49:45.132366 2894 log.go:172] (0xc0000f53f0) (0xc0005b5b80) Stream added, broadcasting: 3\nI0408 21:49:45.133495 2894 log.go:172] (0xc0000f53f0) Reply frame received for 3\nI0408 21:49:45.133534 2894 log.go:172] (0xc0000f53f0) (0xc00078e320) Create stream\nI0408 21:49:45.133543 2894 log.go:172] (0xc0000f53f0) (0xc00078e320) Stream added, broadcasting: 5\nI0408 21:49:45.134317 2894 log.go:172] (0xc0000f53f0) Reply frame received for 5\nI0408 21:49:45.197672 2894 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0408 21:49:45.197696 2894 log.go:172] (0xc0005b5b80) (3) Data frame handling\nI0408 21:49:45.197708 2894 log.go:172] (0xc0005b5b80) (3) Data frame sent\nI0408 21:49:45.197719 2894 log.go:172] (0xc0000f53f0) Data frame received for 3\nI0408 21:49:45.197727 2894 log.go:172] (0xc0005b5b80) (3) Data frame handling\nI0408 21:49:45.197747 2894 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0408 21:49:45.197781 2894 log.go:172] (0xc00078e320) (5) Data frame handling\nI0408 21:49:45.197806 2894 log.go:172] (0xc00078e320) (5) Data frame sent\nI0408 21:49:45.197820 2894 log.go:172] (0xc0000f53f0) Data frame received for 5\nI0408 21:49:45.197830 2894 log.go:172] (0xc00078e320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0408 21:49:45.199371 2894 log.go:172] (0xc0000f53f0) Data frame received for 1\nI0408 21:49:45.199390 2894 log.go:172] (0xc00078e1e0) (1) Data frame handling\nI0408 21:49:45.199405 2894 log.go:172] (0xc00078e1e0) (1) Data frame sent\nI0408 21:49:45.199425 2894 log.go:172] (0xc0000f53f0) (0xc00078e1e0) Stream removed, broadcasting: 1\nI0408 21:49:45.199504 2894 log.go:172] (0xc0000f53f0) Go away received\nI0408 21:49:45.199790 2894 log.go:172] (0xc0000f53f0) (0xc00078e1e0) Stream removed, broadcasting: 1\nI0408 21:49:45.199816 2894 log.go:172] (0xc0000f53f0) (0xc0005b5b80) Stream removed, broadcasting: 3\nI0408 21:49:45.199830 2894 log.go:172] (0xc0000f53f0) 
(0xc00078e320) Stream removed, broadcasting: 5\n" Apr 8 21:49:45.204: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 8 21:49:45.204: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 8 21:49:55.226: INFO: Waiting for StatefulSet statefulset-9682/ss2 to complete update Apr 8 21:49:55.226: INFO: Waiting for Pod statefulset-9682/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 21:49:55.226: INFO: Waiting for Pod statefulset-9682/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 21:49:55.226: INFO: Waiting for Pod statefulset-9682/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 21:50:05.232: INFO: Waiting for StatefulSet statefulset-9682/ss2 to complete update Apr 8 21:50:05.232: INFO: Waiting for Pod statefulset-9682/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 21:50:05.233: INFO: Waiting for Pod statefulset-9682/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 8 21:50:15.233: INFO: Waiting for StatefulSet statefulset-9682/ss2 to complete update Apr 8 21:50:15.233: INFO: Waiting for Pod statefulset-9682/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 8 21:50:25.234: INFO: Deleting all statefulset in ns statefulset-9682 Apr 8 21:50:25.237: INFO: Scaling statefulset ss2 to 0 Apr 8 21:50:35.254: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:50:35.256: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:50:35.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "statefulset-9682" for this suite. • [SLOW TEST:133.942 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":155,"skipped":2441,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:50:35.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Apr 8 21:50:35.351: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
Apr 8 21:50:35.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2022' Apr 8 21:50:35.689: INFO: stderr: "" Apr 8 21:50:35.690: INFO: stdout: "service/agnhost-slave created\n" Apr 8 21:50:35.690: INFO:
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
Apr 8 21:50:35.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2022' Apr 8 21:50:35.996: INFO: stderr: "" Apr 8 21:50:35.996: INFO: stdout: "service/agnhost-master created\n" Apr 8 21:50:35.996: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Apr 8 21:50:35.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2022' Apr 8 21:50:36.248: INFO: stderr: "" Apr 8 21:50:36.248: INFO: stdout: "service/frontend created\n" Apr 8 21:50:36.248: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Apr 8 21:50:36.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2022' Apr 8 21:50:36.496: INFO: stderr: "" Apr 8 21:50:36.496: INFO: stdout: "deployment.apps/frontend created\n" Apr 8 21:50:36.497: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 8 21:50:36.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2022' Apr 8 21:50:36.765: INFO: stderr: "" Apr 8 21:50:36.765: INFO: stdout: "deployment.apps/agnhost-master created\n" Apr 8 21:50:36.766: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Apr 8 21:50:36.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2022' Apr 8 21:50:37.002: INFO: stderr: "" Apr 8 21:50:37.002: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Apr 8 21:50:37.002: INFO: Waiting for all frontend pods to be Running. Apr 8 21:50:42.053: INFO: Waiting for frontend to serve content. Apr 8 21:50:42.124: INFO: Trying to add a new entry to the guestbook. Apr 8 21:50:42.147: INFO: Verifying that added entry can be retrieved. Apr 8 21:50:42.157: INFO: Failed to get response from guestbook.
err: , response: {"data":""} STEP: using delete to clean up resources Apr 8 21:50:47.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2022' Apr 8 21:50:47.322: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 21:50:47.322: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 8 21:50:47.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2022' Apr 8 21:50:47.478: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 21:50:47.478: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 8 21:50:47.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2022' Apr 8 21:50:47.593: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 21:50:47.593: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 8 21:50:47.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2022' Apr 8 21:50:47.695: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 8 21:50:47.695: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 8 21:50:47.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2022' Apr 8 21:50:47.804: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 21:50:47.804: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 8 21:50:47.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2022' Apr 8 21:50:47.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 8 21:50:47.924: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:50:47.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2022" for this suite. 
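An aside on the guestbook validation above: after the "Failed to get response from guestbook" line, the framework keeps retrying until the added entry can be read back (it succeeds about five seconds later). A minimal Python sketch of that retry; `fetch_entries` is a hypothetical stand-in for the HTTP call the test makes against the frontend service:

```python
import time

def wait_for_guestbook_entry(fetch_entries, entry, timeout=60.0, interval=5.0):
    """Poll the guestbook frontend until a previously added entry is
    readable, mirroring the retry seen in the log above.

    fetch_entries is a hypothetical callable returning the frontend's
    JSON response, e.g. {"data": "TestEntry"}."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = fetch_entries()
        if entry in response.get("data", ""):
            return True   # entry retrieved, validation passes
        time.sleep(interval)
    return False          # gave up, the e2e test would fail here
```

An empty `{"data":""}` response, as in the log, simply triggers another poll rather than an immediate failure.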
• [SLOW TEST:12.639 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":156,"skipped":2462,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:50:47.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9c1242f6-7a62-4cab-b69e-807567831739 STEP: Creating a pod to test consume secrets Apr 8 21:50:48.546: INFO: Waiting up to 5m0s for pod "pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf" in namespace "secrets-2495" to be "success or failure" Apr 8 21:50:48.596: INFO: Pod "pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 49.425344ms Apr 8 21:50:50.670: INFO: Pod "pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123630266s Apr 8 21:50:52.674: INFO: Pod "pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf": Phase="Running", Reason="", readiness=true. Elapsed: 4.127691324s Apr 8 21:50:54.678: INFO: Pod "pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131815493s STEP: Saw pod success Apr 8 21:50:54.678: INFO: Pod "pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf" satisfied condition "success or failure" Apr 8 21:50:54.681: INFO: Trying to get logs from node jerma-worker pod pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf container secret-volume-test: STEP: delete the pod Apr 8 21:50:54.728: INFO: Waiting for pod pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf to disappear Apr 8 21:50:54.737: INFO: Pod pod-secrets-6e905c9f-dae3-4a96-bcdd-7750f3fe1caf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:50:54.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2495" for this suite. STEP: Destroying namespace "secret-namespace-3587" for this suite. 
• [SLOW TEST:6.819 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2470,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:50:54.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-dd2da640-250a-4a98-9d3e-60170bfacf46 STEP: Creating a pod to test consume configMaps Apr 8 21:50:54.830: INFO: Waiting up to 5m0s for pod "pod-configmaps-32021936-5af7-4218-bbe1-16b90cb01baa" in namespace "configmap-1756" to be "success or failure" Apr 8 21:50:54.833: INFO: Pod "pod-configmaps-32021936-5af7-4218-bbe1-16b90cb01baa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.493689ms Apr 8 21:50:56.843: INFO: Pod "pod-configmaps-32021936-5af7-4218-bbe1-16b90cb01baa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013089465s Apr 8 21:50:58.847: INFO: Pod "pod-configmaps-32021936-5af7-4218-bbe1-16b90cb01baa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016497117s STEP: Saw pod success Apr 8 21:50:58.847: INFO: Pod "pod-configmaps-32021936-5af7-4218-bbe1-16b90cb01baa" satisfied condition "success or failure" Apr 8 21:50:58.849: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-32021936-5af7-4218-bbe1-16b90cb01baa container configmap-volume-test: STEP: delete the pod Apr 8 21:50:58.929: INFO: Waiting for pod pod-configmaps-32021936-5af7-4218-bbe1-16b90cb01baa to disappear Apr 8 21:50:58.956: INFO: Pod pod-configmaps-32021936-5af7-4218-bbe1-16b90cb01baa no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:50:58.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1756" for this suite. 
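The repeated 'Waiting up to 5m0s for pod ... to be "success or failure"' sequences in this log follow a simple poll-until-terminal-phase loop. A minimal sketch, with `get_phase` as a hypothetical accessor standing in for a pod GET against the API server:

```python
import time

def wait_for_pod_success(get_phase, pod_name, timeout=300.0, poll=2.0):
    """Wait up to `timeout` seconds for a pod to reach the Succeeded
    phase, returning False early if it reaches Failed -- the
    'success or failure' condition logged by the e2e framework."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase(pod_name)
        elapsed = time.monotonic() - start
        print(f'Pod "{pod_name}": Phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase == "Succeeded":
            return True
        if phase == "Failed":
            return False
        time.sleep(poll)  # Pending/Running: keep polling
    raise TimeoutError(f"pod {pod_name} did not finish within {timeout}s")
```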
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2492,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:50:58.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:50:59.113: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-d7a8e9ab-fc66-4db8-acf4-9ea5fa3e3d90" in namespace "security-context-test-9133" to be "success or failure" Apr 8 21:50:59.154: INFO: Pod "busybox-readonly-false-d7a8e9ab-fc66-4db8-acf4-9ea5fa3e3d90": Phase="Pending", Reason="", readiness=false. Elapsed: 40.96725ms Apr 8 21:51:01.179: INFO: Pod "busybox-readonly-false-d7a8e9ab-fc66-4db8-acf4-9ea5fa3e3d90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065878023s Apr 8 21:51:03.183: INFO: Pod "busybox-readonly-false-d7a8e9ab-fc66-4db8-acf4-9ea5fa3e3d90": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.069957225s Apr 8 21:51:03.183: INFO: Pod "busybox-readonly-false-d7a8e9ab-fc66-4db8-acf4-9ea5fa3e3d90" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:51:03.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9133" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":159,"skipped":2506,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:51:03.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:51:03.290: INFO: Create a RollingUpdate DaemonSet Apr 8 21:51:03.293: INFO: Check that daemon pods launch on every node of the cluster Apr 8 21:51:03.311: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:51:03.327: INFO: Number of nodes with available 
pods: 0 Apr 8 21:51:03.327: INFO: Node jerma-worker is running more than one daemon pod Apr 8 21:51:04.332: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:51:04.336: INFO: Number of nodes with available pods: 0 Apr 8 21:51:04.336: INFO: Node jerma-worker is running more than one daemon pod Apr 8 21:51:05.332: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:51:05.336: INFO: Number of nodes with available pods: 0 Apr 8 21:51:05.336: INFO: Node jerma-worker is running more than one daemon pod Apr 8 21:51:06.333: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:51:06.336: INFO: Number of nodes with available pods: 1 Apr 8 21:51:06.336: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:51:07.348: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:51:07.351: INFO: Number of nodes with available pods: 2 Apr 8 21:51:07.351: INFO: Number of running nodes: 2, number of available pods: 2 Apr 8 21:51:07.351: INFO: Update the DaemonSet to trigger a rollout Apr 8 21:51:07.358: INFO: Updating DaemonSet daemon-set Apr 8 21:51:20.371: INFO: Roll back the DaemonSet before rollout is complete Apr 8 21:51:20.377: INFO: Updating DaemonSet daemon-set Apr 8 21:51:20.377: INFO: Make sure DaemonSet rollback is complete Apr 8 21:51:20.467: INFO: Wrong image for pod: daemon-set-8sd4b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
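The per-node bookkeeping repeated above (skip nodes whose NoSchedule taints the DaemonSet does not tolerate, then count nodes with an available daemon pod) can be modelled roughly as follows. This is a deliberate simplification: tolerations are matched on taint key only, and "available" is reduced to a pod count per node:

```python
def nodes_with_available_pods(nodes, tolerated_taint_keys, pods_by_node):
    """Count schedulable nodes that have at least one available daemon pod,
    skipping nodes with NoSchedule taints the DaemonSet does not tolerate."""
    available = 0
    for node in nodes:
        taints = node.get("taints", [])
        if any(t["effect"] == "NoSchedule" and t["key"] not in tolerated_taint_keys
               for t in taints):
            continue  # e.g. a control-plane node with node-role.kubernetes.io/master
        if pods_by_node.get(node["name"], 0) > 0:
            available += 1
    return available
```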
Apr 8 21:51:20.467: INFO: Pod daemon-set-8sd4b is not available Apr 8 21:51:20.469: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:51:21.485: INFO: Wrong image for pod: daemon-set-8sd4b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 8 21:51:21.485: INFO: Pod daemon-set-8sd4b is not available Apr 8 21:51:21.489: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:51:22.478: INFO: Wrong image for pod: daemon-set-8sd4b. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Apr 8 21:51:22.478: INFO: Pod daemon-set-8sd4b is not available Apr 8 21:51:22.480: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 21:51:23.473: INFO: Pod daemon-set-jgj28 is not available Apr 8 21:51:23.477: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9841, will wait for the garbage collector to delete the pods Apr 8 21:51:23.540: INFO: Deleting DaemonSet.extensions daemon-set took: 5.829551ms Apr 8 21:51:23.940: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.321368ms Apr 8 21:51:29.243: INFO: Number of nodes with available pods: 0 Apr 8 21:51:29.243: INFO: Number of running nodes: 0, number of available pods: 0 Apr 8 21:51:29.246: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9841/daemonsets","resourceVersion":"6516650"},"items":null} Apr 8 21:51:29.249: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9841/pods","resourceVersion":"6516650"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:51:29.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9841" for this suite. • [SLOW TEST:26.091 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":160,"skipped":2516,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:51:29.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 8 
21:51:29.354: INFO: Waiting up to 5m0s for pod "pod-5133cfd8-0df4-4d41-8458-9c78b85c8c2e" in namespace "emptydir-3453" to be "success or failure" Apr 8 21:51:29.364: INFO: Pod "pod-5133cfd8-0df4-4d41-8458-9c78b85c8c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.893995ms Apr 8 21:51:31.368: INFO: Pod "pod-5133cfd8-0df4-4d41-8458-9c78b85c8c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013749727s Apr 8 21:51:33.371: INFO: Pod "pod-5133cfd8-0df4-4d41-8458-9c78b85c8c2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016919999s STEP: Saw pod success Apr 8 21:51:33.371: INFO: Pod "pod-5133cfd8-0df4-4d41-8458-9c78b85c8c2e" satisfied condition "success or failure" Apr 8 21:51:33.374: INFO: Trying to get logs from node jerma-worker2 pod pod-5133cfd8-0df4-4d41-8458-9c78b85c8c2e container test-container: STEP: delete the pod Apr 8 21:51:33.413: INFO: Waiting for pod pod-5133cfd8-0df4-4d41-8458-9c78b85c8c2e to disappear Apr 8 21:51:33.426: INFO: Pod pod-5133cfd8-0df4-4d41-8458-9c78b85c8c2e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:51:33.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3453" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2520,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:51:33.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0408 21:52:13.850285 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 8 21:52:13.850: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:52:13.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1144" for this suite. 
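The garbage-collector behaviour this test verifies (delete options that orphan dependents leave the RC's pods running, minus their owner reference) can be modelled with a simplified sketch. Real propagation policies and ownerReference objects are richer than this; here ownership is reduced to a set of owner UIDs per pod:

```python
def collect_after_owner_delete(pods, owner_uid, policy="Orphan"):
    """Simplified garbage-collector model: deleting an owner with the
    Orphan policy strips the owner reference and keeps the pod; other
    policies delete pods owned by `owner_uid`."""
    remaining = []
    for pod in pods:
        owners = set(pod.get("owners", set()))
        if owner_uid in owners:
            if policy != "Orphan":
                continue              # pod is garbage-collected
            owners.discard(owner_uid) # orphaned: keeps running, ownerless
        remaining.append({**pod, "owners": owners})
    return remaining
```

This matches the 30-second check in the test: after an orphaning delete, the simpletest.rc pods are still present (they later show up on the nodes in the scheduling test below).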
• [SLOW TEST:40.423 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":162,"skipped":2565,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:52:13.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 8 21:52:13.897: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 21:52:13.921: INFO: Waiting for terminating namespaces to be deleted... 
Apr 8 21:52:13.924: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 8 21:52:13.963: INFO: simpletest.rc-t4grm from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.963: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.963: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.963: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 21:52:13.963: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.963: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 21:52:13.963: INFO: simpletest.rc-b55rr from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.963: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.963: INFO: simpletest.rc-rwp8b from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.963: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.963: INFO: simpletest.rc-bd9tc from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.963: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.963: INFO: simpletest.rc-4bfhm from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.963: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.963: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 8 21:52:13.970: INFO: simpletest.rc-2qhhq from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.970: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container 
kindnet-cni ready: true, restart count 0 Apr 8 21:52:13.970: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container kube-bench ready: false, restart count 0 Apr 8 21:52:13.970: INFO: simpletest.rc-72cvx from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.970: INFO: simpletest.rc-5w8hw from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.970: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 21:52:13.970: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container kube-hunter ready: false, restart count 0 Apr 8 21:52:13.970: INFO: simpletest.rc-fpvbg from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container nginx ready: true, restart count 0 Apr 8 21:52:13.970: INFO: simpletest.rc-4lwbd from gc-1144 started at 2020-04-08 21:51:33 +0000 UTC (1 container statuses recorded) Apr 8 21:52:13.970: INFO: Container nginx ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
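The scheduling rule this test exercises (pods may share a hostPort as long as hostIP or protocol differs) can be modelled as a small predicate. This is a simplification of the scheduler's actual hostPort check, keeping only the detail that hostIP 0.0.0.0 collides with every address:

```python
def host_port_conflict(existing, candidate):
    """Return True if the candidate (hostIP, protocol, hostPort) triple
    conflicts with any existing one on the node. Conflict requires the
    same port AND protocol AND overlapping hostIP, where 0.0.0.0
    overlaps everything."""
    cip, cproto, cport = candidate
    for ip, proto, port in existing:
        if port == cport and proto == cproto and \
           (ip == cip or ip == "0.0.0.0" or cip == "0.0.0.0"):
            return True
    return False
```

With this model, pod1 (127.0.0.1/TCP/54321), pod2 (127.0.0.2/TCP/54321) and pod3 (127.0.0.2/UDP/54321) all land on the same node without conflict, which is exactly what the test expects.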
STEP: verifying the node has the label kubernetes.io/e2e-abf989b2-0dd3-46d6-b1c0-8311c85216f5 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-abf989b2-0dd3-46d6-b1c0-8311c85216f5 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-abf989b2-0dd3-46d6-b1c0-8311c85216f5 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:52:32.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9910" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.291 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":163,"skipped":2569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:52:32.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7846 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7846 STEP: Creating statefulset with conflicting port in namespace statefulset-7846 STEP: Waiting until pod test-pod will start running in namespace statefulset-7846 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7846 Apr 8 21:52:36.266: INFO: Observed stateful pod in namespace: statefulset-7846, name: ss-0, uid: b692a227-b082-4e08-88cb-381b5ae2039c, status phase: Failed. Waiting for statefulset controller to delete. Apr 8 21:52:36.270: INFO: Observed stateful pod in namespace: statefulset-7846, name: ss-0, uid: b692a227-b082-4e08-88cb-381b5ae2039c, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 8 21:52:36.278: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7846 STEP: Removing pod with conflicting port in namespace statefulset-7846 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7846 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 8 21:52:40.420: INFO: Deleting all statefulset in ns statefulset-7846 Apr 8 21:52:40.423: INFO: Scaling statefulset ss to 0 Apr 8 21:52:50.451: INFO: Waiting for statefulset status.replicas updated to 0 Apr 8 21:52:50.454: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:52:50.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7846" for this suite. • [SLOW TEST:18.326 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":164,"skipped":2604,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:52:50.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Apr 8 21:52:50.550: INFO: namespace kubectl-4331 Apr 8 21:52:50.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4331' Apr 8 21:52:50.802: INFO: stderr: "" Apr 8 21:52:50.802: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 8 21:52:51.806: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:52:51.806: INFO: Found 0 / 1 Apr 8 21:52:52.806: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:52:52.806: INFO: Found 0 / 1 Apr 8 21:52:53.806: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:52:53.806: INFO: Found 1 / 1 Apr 8 21:52:53.806: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 8 21:52:53.810: INFO: Selector matched 1 pods for map[app:agnhost] Apr 8 21:52:53.810: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 8 21:52:53.810: INFO: wait on agnhost-master startup in kubectl-4331 Apr 8 21:52:53.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-j6lh4 agnhost-master --namespace=kubectl-4331' Apr 8 21:52:53.926: INFO: stderr: "" Apr 8 21:52:53.926: INFO: stdout: "Paused\n" STEP: exposing RC Apr 8 21:52:53.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4331' Apr 8 21:52:54.055: INFO: stderr: "" Apr 8 21:52:54.055: INFO: stdout: "service/rm2 exposed\n" Apr 8 21:52:54.073: INFO: Service rm2 in namespace kubectl-4331 found. STEP: exposing service Apr 8 21:52:56.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4331' Apr 8 21:52:56.218: INFO: stderr: "" Apr 8 21:52:56.218: INFO: stdout: "service/rm3 exposed\n" Apr 8 21:52:56.229: INFO: Service rm3 in namespace kubectl-4331 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:52:58.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4331" for this suite. 
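(Aside: `kubectl expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379` is roughly equivalent to creating a Service by hand. A sketch of what `rm2` ends up as, assuming the Service inherits the RC's `app: agnhost` selector as the log's pod selector suggests:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-4331
spec:
  selector:
    app: agnhost          # assumed: inherited from the replication controller's selector
  ports:
  - port: 1234            # from --port
    targetPort: 6379      # from --target-port
```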
• [SLOW TEST:7.769 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":165,"skipped":2615,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:52:58.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Getting the pod STEP: Reading file content from the nginx-container Apr 8 21:53:02.395: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5803 PodName:pod-sharedvolume-ee76ce99-3fcd-46ce-a145-ff5539269d8f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 21:53:02.395: INFO: >>> kubeConfig: /root/.kube/config I0408 21:53:02.437599 6 log.go:172] (0xc001a6ef20) (0xc000b6d860) Create stream I0408 21:53:02.437637 6 log.go:172] (0xc001a6ef20) (0xc000b6d860) Stream
added, broadcasting: 1 I0408 21:53:02.439670 6 log.go:172] (0xc001a6ef20) Reply frame received for 1 I0408 21:53:02.439733 6 log.go:172] (0xc001a6ef20) (0xc000256320) Create stream I0408 21:53:02.439754 6 log.go:172] (0xc001a6ef20) (0xc000256320) Stream added, broadcasting: 3 I0408 21:53:02.440745 6 log.go:172] (0xc001a6ef20) Reply frame received for 3 I0408 21:53:02.440771 6 log.go:172] (0xc001a6ef20) (0xc000257900) Create stream I0408 21:53:02.440783 6 log.go:172] (0xc001a6ef20) (0xc000257900) Stream added, broadcasting: 5 I0408 21:53:02.441927 6 log.go:172] (0xc001a6ef20) Reply frame received for 5 I0408 21:53:02.502046 6 log.go:172] (0xc001a6ef20) Data frame received for 5 I0408 21:53:02.502077 6 log.go:172] (0xc000257900) (5) Data frame handling I0408 21:53:02.502100 6 log.go:172] (0xc001a6ef20) Data frame received for 3 I0408 21:53:02.502110 6 log.go:172] (0xc000256320) (3) Data frame handling I0408 21:53:02.502129 6 log.go:172] (0xc000256320) (3) Data frame sent I0408 21:53:02.502142 6 log.go:172] (0xc001a6ef20) Data frame received for 3 I0408 21:53:02.502154 6 log.go:172] (0xc000256320) (3) Data frame handling I0408 21:53:02.514393 6 log.go:172] (0xc001a6ef20) Data frame received for 1 I0408 21:53:02.514416 6 log.go:172] (0xc000b6d860) (1) Data frame handling I0408 21:53:02.514426 6 log.go:172] (0xc000b6d860) (1) Data frame sent I0408 21:53:02.514434 6 log.go:172] (0xc001a6ef20) (0xc000b6d860) Stream removed, broadcasting: 1 I0408 21:53:02.514443 6 log.go:172] (0xc001a6ef20) Go away received I0408 21:53:02.514567 6 log.go:172] (0xc001a6ef20) (0xc000b6d860) Stream removed, broadcasting: 1 I0408 21:53:02.514589 6 log.go:172] (0xc001a6ef20) (0xc000256320) Stream removed, broadcasting: 3 I0408 21:53:02.514599 6 log.go:172] (0xc001a6ef20) (0xc000257900) Stream removed, broadcasting: 5 Apr 8 21:53:02.514: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:53:02.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5803" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":166,"skipped":2619,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:53:02.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 8 21:53:02.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8248' Apr 8 21:53:02.692: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 8 21:53:02.692: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 8 21:53:02.716: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 8 21:53:02.723: INFO: scanned /root for discovery docs: Apr 8 21:53:02.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8248' Apr 8 21:53:18.647: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 8 21:53:18.647: INFO: stdout: "Created e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c\nScaling up e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 8 21:53:18.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-8248' Apr 8 21:53:18.745: INFO: stderr: "" Apr 8 21:53:18.745: INFO: stdout: "e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c-zb2p9 e2e-test-httpd-rc-wwfk8 " STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2 Apr 8 21:53:23.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-8248' Apr 8 21:53:23.844: INFO: stderr: "" Apr 8 21:53:23.844: INFO: stdout: "e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c-zb2p9 " Apr 8 21:53:23.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c-zb2p9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8248' Apr 8 21:53:23.946: INFO: stderr: "" Apr 8 21:53:23.946: INFO: stdout: "true" Apr 8 21:53:23.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c-zb2p9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8248' Apr 8 21:53:24.041: INFO: stderr: "" Apr 8 21:53:24.041: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 8 21:53:24.041: INFO: e2e-test-httpd-rc-07bd888b717ff2dd8eb6afdf4947af6c-zb2p9 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Apr 8 21:53:24.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8248' Apr 8 21:53:24.138: INFO: stderr: "" Apr 8 21:53:24.138: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:53:24.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8248" for this suite. 
• [SLOW TEST:21.672 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":167,"skipped":2644,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:53:24.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 21:53:24.261: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 8 21:53:24.268: INFO: Number of nodes with available pods: 0 Apr 8 21:53:24.268: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
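(Aside: the "complex daemon" being created here is a DaemonSet constrained by a node selector, so relabeling a node schedules or unschedules its pod. A rough sketch — the label key/value, container name, and image are illustrative, not from the test source:)

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate          # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      nodeSelector:
        color: blue              # illustrative; relabeling the node green evicts the pod
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/agnhost:2.8   # illustrative image
```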
Apr 8 21:53:24.331: INFO: Number of nodes with available pods: 0 Apr 8 21:53:24.331: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:25.336: INFO: Number of nodes with available pods: 0 Apr 8 21:53:25.336: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:26.336: INFO: Number of nodes with available pods: 0 Apr 8 21:53:26.336: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:27.339: INFO: Number of nodes with available pods: 0 Apr 8 21:53:27.339: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:28.336: INFO: Number of nodes with available pods: 1 Apr 8 21:53:28.336: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 8 21:53:28.391: INFO: Number of nodes with available pods: 1 Apr 8 21:53:28.391: INFO: Number of running nodes: 0, number of available pods: 1 Apr 8 21:53:29.415: INFO: Number of nodes with available pods: 0 Apr 8 21:53:29.415: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 8 21:53:29.442: INFO: Number of nodes with available pods: 0 Apr 8 21:53:29.442: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:30.467: INFO: Number of nodes with available pods: 0 Apr 8 21:53:30.467: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:31.446: INFO: Number of nodes with available pods: 0 Apr 8 21:53:31.446: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:32.447: INFO: Number of nodes with available pods: 0 Apr 8 21:53:32.447: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:33.447: INFO: Number of nodes with available pods: 0 Apr 8 21:53:33.447: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:34.447: INFO: Number of nodes with available pods: 0 Apr 8 
21:53:34.447: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:35.447: INFO: Number of nodes with available pods: 0 Apr 8 21:53:35.447: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:36.446: INFO: Number of nodes with available pods: 0 Apr 8 21:53:36.446: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:37.447: INFO: Number of nodes with available pods: 0 Apr 8 21:53:37.447: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:38.446: INFO: Number of nodes with available pods: 0 Apr 8 21:53:38.446: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:39.447: INFO: Number of nodes with available pods: 0 Apr 8 21:53:39.447: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:40.447: INFO: Number of nodes with available pods: 0 Apr 8 21:53:40.447: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:41.451: INFO: Number of nodes with available pods: 0 Apr 8 21:53:41.451: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:42.447: INFO: Number of nodes with available pods: 0 Apr 8 21:53:42.447: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 21:53:43.447: INFO: Number of nodes with available pods: 1 Apr 8 21:53:43.447: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4350, will wait for the garbage collector to delete the pods Apr 8 21:53:43.513: INFO: Deleting DaemonSet.extensions daemon-set took: 6.385724ms Apr 8 21:53:43.813: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.263744ms Apr 8 21:53:49.522: INFO: Number of nodes with available pods: 0 Apr 8 21:53:49.523: INFO: Number of running nodes: 0, number 
of available pods: 0 Apr 8 21:53:49.525: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4350/daemonsets","resourceVersion":"6517718"},"items":null} Apr 8 21:53:49.528: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4350/pods","resourceVersion":"6517718"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:53:49.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4350" for this suite. • [SLOW TEST:25.372 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":168,"skipped":2654,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:53:49.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count 
[NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-b3ebf7e2-e810-42c4-9838-4b8aa45e7e46 in namespace container-probe-695 Apr 8 21:53:53.661: INFO: Started pod liveness-b3ebf7e2-e810-42c4-9838-4b8aa45e7e46 in namespace container-probe-695 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 21:53:53.664: INFO: Initial restart count of pod liveness-b3ebf7e2-e810-42c4-9838-4b8aa45e7e46 is 0 Apr 8 21:54:09.699: INFO: Restart count of pod container-probe-695/liveness-b3ebf7e2-e810-42c4-9838-4b8aa45e7e46 is now 1 (16.035294848s elapsed) Apr 8 21:54:29.742: INFO: Restart count of pod container-probe-695/liveness-b3ebf7e2-e810-42c4-9838-4b8aa45e7e46 is now 2 (36.078053248s elapsed) Apr 8 21:54:49.785: INFO: Restart count of pod container-probe-695/liveness-b3ebf7e2-e810-42c4-9838-4b8aa45e7e46 is now 3 (56.120683388s elapsed) Apr 8 21:55:09.827: INFO: Restart count of pod container-probe-695/liveness-b3ebf7e2-e810-42c4-9838-4b8aa45e7e46 is now 4 (1m16.163039472s elapsed) Apr 8 21:56:11.967: INFO: Restart count of pod container-probe-695/liveness-b3ebf7e2-e810-42c4-9838-4b8aa45e7e46 is now 5 (2m18.30279043s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:56:11.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-695" for this suite. 
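(Aside: the monotonically increasing restart count observed above comes from a pod whose liveness probe keeps failing, so the kubelet restarts the container on each probe failure. A minimal hypothetical sketch — the pod name, image, and probe command are illustrative:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example       # illustrative name
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29   # illustrative image
    args: ["/bin/sh", "-c", "sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/does-not-exist"]   # always fails, so the kubelet restarts the container
      initialDelaySeconds: 5
      periodSeconds: 5
```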
• [SLOW TEST:142.428 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:56:11.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-c3a988e5-3542-4bfe-b710-d1bd49854916 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:56:16.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8010" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2711,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:56:16.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 8 21:56:16.201: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:56:23.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3932" for this suite. 
• [SLOW TEST:7.596 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":171,"skipped":2725,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:56:23.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-650f7e15-e127-4530-a805-0d2a72cb2888 STEP: Creating a pod to test consume configMaps Apr 8 21:56:23.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-98d97952-4719-4b21-9d26-e859b9f8dfa7" in namespace "configmap-6326" to be "success or failure" Apr 8 21:56:23.870: INFO: Pod "pod-configmaps-98d97952-4719-4b21-9d26-e859b9f8dfa7": Phase="Pending", Reason="", readiness=false. Elapsed: 23.344245ms Apr 8 21:56:25.874: INFO: Pod "pod-configmaps-98d97952-4719-4b21-9d26-e859b9f8dfa7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.027215337s Apr 8 21:56:27.878: INFO: Pod "pod-configmaps-98d97952-4719-4b21-9d26-e859b9f8dfa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031082676s STEP: Saw pod success Apr 8 21:56:27.878: INFO: Pod "pod-configmaps-98d97952-4719-4b21-9d26-e859b9f8dfa7" satisfied condition "success or failure" Apr 8 21:56:27.881: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-98d97952-4719-4b21-9d26-e859b9f8dfa7 container configmap-volume-test: STEP: delete the pod Apr 8 21:56:27.959: INFO: Waiting for pod pod-configmaps-98d97952-4719-4b21-9d26-e859b9f8dfa7 to disappear Apr 8 21:56:27.965: INFO: Pod pod-configmaps-98d97952-4719-4b21-9d26-e859b9f8dfa7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:56:27.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6326" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2735,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:56:27.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 
configMap with name cm-test-opt-del-80cef421-2813-4dce-b295-a972892bd073 STEP: Creating configMap with name cm-test-opt-upd-bdb306e8-79e7-4e42-aeaa-598e63c6a0e6 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-80cef421-2813-4dce-b295-a972892bd073 STEP: Updating configmap cm-test-opt-upd-bdb306e8-79e7-4e42-aeaa-598e63c6a0e6 STEP: Creating configMap with name cm-test-opt-create-7ad4c8a1-f5f6-4f36-8bc8-67ed50c1bbd6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:56:36.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9575" for this suite. • [SLOW TEST:8.204 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2762,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:56:36.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in 
volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-a8b20adb-b2c3-40bc-b1f4-39c53637f250 STEP: Creating a pod to test consume configMaps Apr 8 21:56:36.279: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eb259bca-691f-41f6-878f-f99c00c5f578" in namespace "projected-421" to be "success or failure" Apr 8 21:56:36.283: INFO: Pod "pod-projected-configmaps-eb259bca-691f-41f6-878f-f99c00c5f578": Phase="Pending", Reason="", readiness=false. Elapsed: 3.379838ms Apr 8 21:56:38.287: INFO: Pod "pod-projected-configmaps-eb259bca-691f-41f6-878f-f99c00c5f578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007133483s Apr 8 21:56:40.291: INFO: Pod "pod-projected-configmaps-eb259bca-691f-41f6-878f-f99c00c5f578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011200179s STEP: Saw pod success Apr 8 21:56:40.291: INFO: Pod "pod-projected-configmaps-eb259bca-691f-41f6-878f-f99c00c5f578" satisfied condition "success or failure" Apr 8 21:56:40.294: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-eb259bca-691f-41f6-878f-f99c00c5f578 container projected-configmap-volume-test: STEP: delete the pod Apr 8 21:56:40.359: INFO: Waiting for pod pod-projected-configmaps-eb259bca-691f-41f6-878f-f99c00c5f578 to disappear Apr 8 21:56:40.367: INFO: Pod pod-projected-configmaps-eb259bca-691f-41f6-878f-f99c00c5f578 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:56:40.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-421" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2768,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:56:40.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-e996a6d0-b3a4-4387-bc48-fbb94fb1ae38 STEP: Creating a pod to test consume configMaps Apr 8 21:56:40.453: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1e73f5dd-8686-4765-a6a7-a3ae0bcca16b" in namespace "projected-6953" to be "success or failure" Apr 8 21:56:40.483: INFO: Pod "pod-projected-configmaps-1e73f5dd-8686-4765-a6a7-a3ae0bcca16b": Phase="Pending", Reason="", readiness=false. Elapsed: 29.841406ms Apr 8 21:56:42.607: INFO: Pod "pod-projected-configmaps-1e73f5dd-8686-4765-a6a7-a3ae0bcca16b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153666342s Apr 8 21:56:44.612: INFO: Pod "pod-projected-configmaps-1e73f5dd-8686-4765-a6a7-a3ae0bcca16b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.158204401s STEP: Saw pod success Apr 8 21:56:44.612: INFO: Pod "pod-projected-configmaps-1e73f5dd-8686-4765-a6a7-a3ae0bcca16b" satisfied condition "success or failure" Apr 8 21:56:44.615: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-1e73f5dd-8686-4765-a6a7-a3ae0bcca16b container projected-configmap-volume-test: STEP: delete the pod Apr 8 21:56:44.638: INFO: Waiting for pod pod-projected-configmaps-1e73f5dd-8686-4765-a6a7-a3ae0bcca16b to disappear Apr 8 21:56:44.643: INFO: Pod pod-projected-configmaps-1e73f5dd-8686-4765-a6a7-a3ae0bcca16b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:56:44.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6953" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2775,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:56:44.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 21:56:44.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6de766a6-8ad6-4779-9c35-157f0479e9ae" in namespace "projected-3693" to be "success or failure" Apr 8 21:56:44.757: INFO: Pod "downwardapi-volume-6de766a6-8ad6-4779-9c35-157f0479e9ae": Phase="Pending", Reason="", readiness=false. Elapsed: 40.501869ms Apr 8 21:56:46.761: INFO: Pod "downwardapi-volume-6de766a6-8ad6-4779-9c35-157f0479e9ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044503972s Apr 8 21:56:48.764: INFO: Pod "downwardapi-volume-6de766a6-8ad6-4779-9c35-157f0479e9ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047395178s STEP: Saw pod success Apr 8 21:56:48.764: INFO: Pod "downwardapi-volume-6de766a6-8ad6-4779-9c35-157f0479e9ae" satisfied condition "success or failure" Apr 8 21:56:48.766: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6de766a6-8ad6-4779-9c35-157f0479e9ae container client-container: STEP: delete the pod Apr 8 21:56:48.788: INFO: Waiting for pod downwardapi-volume-6de766a6-8ad6-4779-9c35-157f0479e9ae to disappear Apr 8 21:56:48.862: INFO: Pod downwardapi-volume-6de766a6-8ad6-4779-9c35-157f0479e9ae no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 21:56:48.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3693" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2787,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 21:56:48.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-c7a00a09-7f4b-47cd-a7a4-1c13eb8d5a46 in namespace container-probe-2756 Apr 8 21:56:52.957: INFO: Started pod test-webserver-c7a00a09-7f4b-47cd-a7a4-1c13eb8d5a46 in namespace container-probe-2756 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 21:56:52.960: INFO: Initial restart count of pod test-webserver-c7a00a09-7f4b-47cd-a7a4-1c13eb8d5a46 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:00:53.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2756" for this suite. 
• [SLOW TEST:244.713 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2899,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:00:53.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:00:53.896: INFO: Creating ReplicaSet my-hostname-basic-e5962e20-0639-480f-84f3-35db70deef27 Apr 8 22:00:53.918: INFO: Pod name my-hostname-basic-e5962e20-0639-480f-84f3-35db70deef27: Found 0 pods out of 1 Apr 8 22:00:58.922: INFO: Pod name my-hostname-basic-e5962e20-0639-480f-84f3-35db70deef27: Found 1 pods out of 1 Apr 8 22:00:58.922: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e5962e20-0639-480f-84f3-35db70deef27" is running Apr 8 22:00:58.942: INFO: Pod "my-hostname-basic-e5962e20-0639-480f-84f3-35db70deef27-cz6js" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 
UTC LastTransitionTime:2020-04-08 22:00:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 22:00:56 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 22:00:56 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 22:00:53 +0000 UTC Reason: Message:}]) Apr 8 22:00:58.942: INFO: Trying to dial the pod Apr 8 22:01:03.952: INFO: Controller my-hostname-basic-e5962e20-0639-480f-84f3-35db70deef27: Got expected result from replica 1 [my-hostname-basic-e5962e20-0639-480f-84f3-35db70deef27-cz6js]: "my-hostname-basic-e5962e20-0639-480f-84f3-35db70deef27-cz6js", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:01:03.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3130" for this suite. 
• [SLOW TEST:10.370 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":178,"skipped":2916,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:01:03.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-6696 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6696 STEP: Deleting pre-stop pod Apr 8 22:01:17.099: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:01:17.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6696" for this suite. • [SLOW TEST:13.197 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":179,"skipped":2920,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:01:17.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 8 22:01:17.513: INFO: Waiting up to 5m0s for pod "pod-707f24be-c7a2-42d9-a569-7abaa22b580a" in namespace "emptydir-1353" to be "success or failure" Apr 8 22:01:17.516: INFO: Pod "pod-707f24be-c7a2-42d9-a569-7abaa22b580a": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.527614ms Apr 8 22:01:19.520: INFO: Pod "pod-707f24be-c7a2-42d9-a569-7abaa22b580a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006652703s Apr 8 22:01:21.524: INFO: Pod "pod-707f24be-c7a2-42d9-a569-7abaa22b580a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010649158s STEP: Saw pod success Apr 8 22:01:21.524: INFO: Pod "pod-707f24be-c7a2-42d9-a569-7abaa22b580a" satisfied condition "success or failure" Apr 8 22:01:21.527: INFO: Trying to get logs from node jerma-worker2 pod pod-707f24be-c7a2-42d9-a569-7abaa22b580a container test-container: STEP: delete the pod Apr 8 22:01:21.577: INFO: Waiting for pod pod-707f24be-c7a2-42d9-a569-7abaa22b580a to disappear Apr 8 22:01:21.588: INFO: Pod pod-707f24be-c7a2-42d9-a569-7abaa22b580a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:01:21.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1353" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2926,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:01:21.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 8 22:01:21.675: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:01:34.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1024" for this suite. 
• [SLOW TEST:13.277 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":181,"skipped":2928,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:01:34.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-c8f7 STEP: Creating a pod to test atomic-volume-subpath Apr 8 22:01:34.984: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-c8f7" in namespace "subpath-6449" to be "success or failure" Apr 8 22:01:34.989: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.33273ms Apr 8 22:01:36.992: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007902989s Apr 8 22:01:39.001: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 4.016977975s Apr 8 22:01:41.004: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 6.019595357s Apr 8 22:01:43.056: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 8.072214057s Apr 8 22:01:45.060: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 10.075863096s Apr 8 22:01:47.092: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 12.108006822s Apr 8 22:01:49.099: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 14.114546579s Apr 8 22:01:51.103: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 16.11833329s Apr 8 22:01:53.106: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 18.121354896s Apr 8 22:01:55.110: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 20.125706628s Apr 8 22:01:57.114: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Running", Reason="", readiness=true. Elapsed: 22.130062845s Apr 8 22:01:59.118: INFO: Pod "pod-subpath-test-downwardapi-c8f7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.134024167s STEP: Saw pod success Apr 8 22:01:59.118: INFO: Pod "pod-subpath-test-downwardapi-c8f7" satisfied condition "success or failure" Apr 8 22:01:59.121: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-c8f7 container test-container-subpath-downwardapi-c8f7: STEP: delete the pod Apr 8 22:01:59.190: INFO: Waiting for pod pod-subpath-test-downwardapi-c8f7 to disappear Apr 8 22:01:59.194: INFO: Pod pod-subpath-test-downwardapi-c8f7 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-c8f7 Apr 8 22:01:59.194: INFO: Deleting pod "pod-subpath-test-downwardapi-c8f7" in namespace "subpath-6449" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:01:59.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6449" for this suite. • [SLOW TEST:24.350 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":182,"skipped":2936,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:01:59.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:01:59.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8420" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":183,"skipped":2938,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:01:59.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 8 22:01:59.466: INFO: Waiting up to 5m0s for pod "pod-b0669c7e-3cde-453e-af5f-2bce46b33142" in namespace "emptydir-6486" to be "success or failure" Apr 8 22:01:59.476: INFO: Pod "pod-b0669c7e-3cde-453e-af5f-2bce46b33142": Phase="Pending", Reason="", readiness=false. Elapsed: 9.277109ms Apr 8 22:02:01.548: INFO: Pod "pod-b0669c7e-3cde-453e-af5f-2bce46b33142": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081650461s Apr 8 22:02:03.566: INFO: Pod "pod-b0669c7e-3cde-453e-af5f-2bce46b33142": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.099180397s STEP: Saw pod success Apr 8 22:02:03.566: INFO: Pod "pod-b0669c7e-3cde-453e-af5f-2bce46b33142" satisfied condition "success or failure" Apr 8 22:02:03.568: INFO: Trying to get logs from node jerma-worker2 pod pod-b0669c7e-3cde-453e-af5f-2bce46b33142 container test-container: STEP: delete the pod Apr 8 22:02:03.632: INFO: Waiting for pod pod-b0669c7e-3cde-453e-af5f-2bce46b33142 to disappear Apr 8 22:02:03.706: INFO: Pod pod-b0669c7e-3cde-453e-af5f-2bce46b33142 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:03.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6486" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2939,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:03.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in 
namespace services-251 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-251 I0408 22:02:03.897217 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-251, replica count: 2 I0408 22:02:06.947628 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 22:02:09.947840 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 22:02:09.947: INFO: Creating new exec pod Apr 8 22:02:14.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-251 execpodd55mn -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 8 22:02:17.520: INFO: stderr: "I0408 22:02:17.437807 3403 log.go:172] (0xc0008b8bb0) (0xc000751540) Create stream\nI0408 22:02:17.437844 3403 log.go:172] (0xc0008b8bb0) (0xc000751540) Stream added, broadcasting: 1\nI0408 22:02:17.440319 3403 log.go:172] (0xc0008b8bb0) Reply frame received for 1\nI0408 22:02:17.440358 3403 log.go:172] (0xc0008b8bb0) (0xc0008b2000) Create stream\nI0408 22:02:17.440367 3403 log.go:172] (0xc0008b8bb0) (0xc0008b2000) Stream added, broadcasting: 3\nI0408 22:02:17.441462 3403 log.go:172] (0xc0008b8bb0) Reply frame received for 3\nI0408 22:02:17.441505 3403 log.go:172] (0xc0008b8bb0) (0xc0008a0000) Create stream\nI0408 22:02:17.441520 3403 log.go:172] (0xc0008b8bb0) (0xc0008a0000) Stream added, broadcasting: 5\nI0408 22:02:17.442385 3403 log.go:172] (0xc0008b8bb0) Reply frame received for 5\nI0408 22:02:17.514103 3403 log.go:172] (0xc0008b8bb0) Data frame received for 3\nI0408 22:02:17.514158 3403 log.go:172] (0xc0008b2000) (3) Data frame handling\nI0408 22:02:17.514186 3403 log.go:172] (0xc0008b8bb0) Data frame received for 5\nI0408 
22:02:17.514206 3403 log.go:172] (0xc0008a0000) (5) Data frame handling\nI0408 22:02:17.514220 3403 log.go:172] (0xc0008a0000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0408 22:02:17.514335 3403 log.go:172] (0xc0008b8bb0) Data frame received for 5\nI0408 22:02:17.514387 3403 log.go:172] (0xc0008a0000) (5) Data frame handling\nI0408 22:02:17.516030 3403 log.go:172] (0xc0008b8bb0) Data frame received for 1\nI0408 22:02:17.516052 3403 log.go:172] (0xc000751540) (1) Data frame handling\nI0408 22:02:17.516074 3403 log.go:172] (0xc000751540) (1) Data frame sent\nI0408 22:02:17.516093 3403 log.go:172] (0xc0008b8bb0) (0xc000751540) Stream removed, broadcasting: 1\nI0408 22:02:17.516376 3403 log.go:172] (0xc0008b8bb0) Go away received\nI0408 22:02:17.516441 3403 log.go:172] (0xc0008b8bb0) (0xc000751540) Stream removed, broadcasting: 1\nI0408 22:02:17.516457 3403 log.go:172] (0xc0008b8bb0) (0xc0008b2000) Stream removed, broadcasting: 3\nI0408 22:02:17.516465 3403 log.go:172] (0xc0008b8bb0) (0xc0008a0000) Stream removed, broadcasting: 5\n" Apr 8 22:02:17.520: INFO: stdout: "" Apr 8 22:02:17.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-251 execpodd55mn -- /bin/sh -x -c nc -zv -t -w 2 10.102.32.121 80' Apr 8 22:02:17.726: INFO: stderr: "I0408 22:02:17.659236 3435 log.go:172] (0xc000a40dc0) (0xc000a44460) Create stream\nI0408 22:02:17.659296 3435 log.go:172] (0xc000a40dc0) (0xc000a44460) Stream added, broadcasting: 1\nI0408 22:02:17.663150 3435 log.go:172] (0xc000a40dc0) Reply frame received for 1\nI0408 22:02:17.663183 3435 log.go:172] (0xc000a40dc0) (0xc0005a6820) Create stream\nI0408 22:02:17.663191 3435 log.go:172] (0xc000a40dc0) (0xc0005a6820) Stream added, broadcasting: 3\nI0408 22:02:17.663953 3435 log.go:172] (0xc000a40dc0) Reply frame received for 3\nI0408 22:02:17.663992 3435 log.go:172] (0xc000a40dc0) (0xc00026f5e0) Create 
stream\nI0408 22:02:17.664004 3435 log.go:172] (0xc000a40dc0) (0xc00026f5e0) Stream added, broadcasting: 5\nI0408 22:02:17.664624 3435 log.go:172] (0xc000a40dc0) Reply frame received for 5\nI0408 22:02:17.720379 3435 log.go:172] (0xc000a40dc0) Data frame received for 3\nI0408 22:02:17.720432 3435 log.go:172] (0xc0005a6820) (3) Data frame handling\nI0408 22:02:17.720465 3435 log.go:172] (0xc000a40dc0) Data frame received for 5\nI0408 22:02:17.720484 3435 log.go:172] (0xc00026f5e0) (5) Data frame handling\nI0408 22:02:17.720509 3435 log.go:172] (0xc00026f5e0) (5) Data frame sent\nI0408 22:02:17.720524 3435 log.go:172] (0xc000a40dc0) Data frame received for 5\nI0408 22:02:17.720536 3435 log.go:172] (0xc00026f5e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.32.121 80\nConnection to 10.102.32.121 80 port [tcp/http] succeeded!\nI0408 22:02:17.721778 3435 log.go:172] (0xc000a40dc0) Data frame received for 1\nI0408 22:02:17.721811 3435 log.go:172] (0xc000a44460) (1) Data frame handling\nI0408 22:02:17.721843 3435 log.go:172] (0xc000a44460) (1) Data frame sent\nI0408 22:02:17.721865 3435 log.go:172] (0xc000a40dc0) (0xc000a44460) Stream removed, broadcasting: 1\nI0408 22:02:17.721889 3435 log.go:172] (0xc000a40dc0) Go away received\nI0408 22:02:17.722211 3435 log.go:172] (0xc000a40dc0) (0xc000a44460) Stream removed, broadcasting: 1\nI0408 22:02:17.722231 3435 log.go:172] (0xc000a40dc0) (0xc0005a6820) Stream removed, broadcasting: 3\nI0408 22:02:17.722241 3435 log.go:172] (0xc000a40dc0) (0xc00026f5e0) Stream removed, broadcasting: 5\n" Apr 8 22:02:17.727: INFO: stdout: "" Apr 8 22:02:17.727: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:17.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-251" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.060 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":185,"skipped":2980,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:17.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 22:02:17.882: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38ed1f94-4d5a-4860-b8d4-8c9b39b324c5" in namespace "projected-6511" to be "success or failure" Apr 8 22:02:17.908: INFO: Pod "downwardapi-volume-38ed1f94-4d5a-4860-b8d4-8c9b39b324c5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.257358ms Apr 8 22:02:19.912: INFO: Pod "downwardapi-volume-38ed1f94-4d5a-4860-b8d4-8c9b39b324c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030551094s Apr 8 22:02:21.916: INFO: Pod "downwardapi-volume-38ed1f94-4d5a-4860-b8d4-8c9b39b324c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034712588s STEP: Saw pod success Apr 8 22:02:21.916: INFO: Pod "downwardapi-volume-38ed1f94-4d5a-4860-b8d4-8c9b39b324c5" satisfied condition "success or failure" Apr 8 22:02:21.919: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-38ed1f94-4d5a-4860-b8d4-8c9b39b324c5 container client-container: STEP: delete the pod Apr 8 22:02:21.937: INFO: Waiting for pod downwardapi-volume-38ed1f94-4d5a-4860-b8d4-8c9b39b324c5 to disappear Apr 8 22:02:21.961: INFO: Pod downwardapi-volume-38ed1f94-4d5a-4860-b8d4-8c9b39b324c5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:21.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6511" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":2990,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:22.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:22.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4444" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":187,"skipped":3008,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:22.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-7989 STEP: creating replication controller nodeport-test in namespace services-7989 I0408 22:02:22.306761 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-7989, replica count: 2 I0408 22:02:25.357260 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0408 22:02:28.357414 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 8 22:02:28.357: INFO: Creating new exec pod Apr 8 22:02:33.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7989 execpodrqw69 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 8 22:02:33.632: INFO: stderr: "I0408 22:02:33.535822 3458 log.go:172] 
(0xc000a2ef20) (0xc0004d2640) Create stream\nI0408 22:02:33.535884 3458 log.go:172] (0xc000a2ef20) (0xc0004d2640) Stream added, broadcasting: 1\nI0408 22:02:33.538469 3458 log.go:172] (0xc000a2ef20) Reply frame received for 1\nI0408 22:02:33.538514 3458 log.go:172] (0xc000a2ef20) (0xc000a32000) Create stream\nI0408 22:02:33.538530 3458 log.go:172] (0xc000a2ef20) (0xc000a32000) Stream added, broadcasting: 3\nI0408 22:02:33.539274 3458 log.go:172] (0xc000a2ef20) Reply frame received for 3\nI0408 22:02:33.539298 3458 log.go:172] (0xc000a2ef20) (0xc0004d26e0) Create stream\nI0408 22:02:33.539305 3458 log.go:172] (0xc000a2ef20) (0xc0004d26e0) Stream added, broadcasting: 5\nI0408 22:02:33.540205 3458 log.go:172] (0xc000a2ef20) Reply frame received for 5\nI0408 22:02:33.624044 3458 log.go:172] (0xc000a2ef20) Data frame received for 5\nI0408 22:02:33.624073 3458 log.go:172] (0xc0004d26e0) (5) Data frame handling\nI0408 22:02:33.624090 3458 log.go:172] (0xc0004d26e0) (5) Data frame sent\nI0408 22:02:33.624097 3458 log.go:172] (0xc000a2ef20) Data frame received for 5\nI0408 22:02:33.624104 3458 log.go:172] (0xc0004d26e0) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0408 22:02:33.624137 3458 log.go:172] (0xc0004d26e0) (5) Data frame sent\nI0408 22:02:33.624151 3458 log.go:172] (0xc000a2ef20) Data frame received for 5\nI0408 22:02:33.624162 3458 log.go:172] (0xc0004d26e0) (5) Data frame handling\nI0408 22:02:33.624191 3458 log.go:172] (0xc000a2ef20) Data frame received for 3\nI0408 22:02:33.624228 3458 log.go:172] (0xc000a32000) (3) Data frame handling\nI0408 22:02:33.626288 3458 log.go:172] (0xc000a2ef20) Data frame received for 1\nI0408 22:02:33.626318 3458 log.go:172] (0xc0004d2640) (1) Data frame handling\nI0408 22:02:33.626335 3458 log.go:172] (0xc0004d2640) (1) Data frame sent\nI0408 22:02:33.626350 3458 log.go:172] (0xc000a2ef20) (0xc0004d2640) Stream removed, broadcasting: 1\nI0408 
22:02:33.626408 3458 log.go:172] (0xc000a2ef20) Go away received\nI0408 22:02:33.627446 3458 log.go:172] (0xc000a2ef20) (0xc0004d2640) Stream removed, broadcasting: 1\nI0408 22:02:33.627492 3458 log.go:172] (0xc000a2ef20) (0xc000a32000) Stream removed, broadcasting: 3\nI0408 22:02:33.627515 3458 log.go:172] (0xc000a2ef20) (0xc0004d26e0) Stream removed, broadcasting: 5\n" Apr 8 22:02:33.632: INFO: stdout: "" Apr 8 22:02:33.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7989 execpodrqw69 -- /bin/sh -x -c nc -zv -t -w 2 10.107.102.1 80' Apr 8 22:02:33.842: INFO: stderr: "I0408 22:02:33.770250 3478 log.go:172] (0xc000a16630) (0xc000763360) Create stream\nI0408 22:02:33.770328 3478 log.go:172] (0xc000a16630) (0xc000763360) Stream added, broadcasting: 1\nI0408 22:02:33.772884 3478 log.go:172] (0xc000a16630) Reply frame received for 1\nI0408 22:02:33.772933 3478 log.go:172] (0xc000a16630) (0xc0006d7900) Create stream\nI0408 22:02:33.772946 3478 log.go:172] (0xc000a16630) (0xc0006d7900) Stream added, broadcasting: 3\nI0408 22:02:33.774088 3478 log.go:172] (0xc000a16630) Reply frame received for 3\nI0408 22:02:33.774126 3478 log.go:172] (0xc000a16630) (0xc0009dc000) Create stream\nI0408 22:02:33.774140 3478 log.go:172] (0xc000a16630) (0xc0009dc000) Stream added, broadcasting: 5\nI0408 22:02:33.775071 3478 log.go:172] (0xc000a16630) Reply frame received for 5\nI0408 22:02:33.834939 3478 log.go:172] (0xc000a16630) Data frame received for 3\nI0408 22:02:33.834983 3478 log.go:172] (0xc000a16630) Data frame received for 5\nI0408 22:02:33.835011 3478 log.go:172] (0xc0009dc000) (5) Data frame handling\nI0408 22:02:33.835026 3478 log.go:172] (0xc0009dc000) (5) Data frame sent\nI0408 22:02:33.835035 3478 log.go:172] (0xc000a16630) Data frame received for 5\nI0408 22:02:33.835042 3478 log.go:172] (0xc0009dc000) (5) Data frame handling\n+ nc -zv -t -w 2 10.107.102.1 80\nConnection to 10.107.102.1 80 port [tcp/http] 
succeeded!\nI0408 22:02:33.835061 3478 log.go:172] (0xc0006d7900) (3) Data frame handling\nI0408 22:02:33.836606 3478 log.go:172] (0xc000a16630) Data frame received for 1\nI0408 22:02:33.836625 3478 log.go:172] (0xc000763360) (1) Data frame handling\nI0408 22:02:33.836643 3478 log.go:172] (0xc000763360) (1) Data frame sent\nI0408 22:02:33.836658 3478 log.go:172] (0xc000a16630) (0xc000763360) Stream removed, broadcasting: 1\nI0408 22:02:33.836853 3478 log.go:172] (0xc000a16630) Go away received\nI0408 22:02:33.836959 3478 log.go:172] (0xc000a16630) (0xc000763360) Stream removed, broadcasting: 1\nI0408 22:02:33.836978 3478 log.go:172] (0xc000a16630) (0xc0006d7900) Stream removed, broadcasting: 3\nI0408 22:02:33.836988 3478 log.go:172] (0xc000a16630) (0xc0009dc000) Stream removed, broadcasting: 5\n" Apr 8 22:02:33.842: INFO: stdout: "" Apr 8 22:02:33.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7989 execpodrqw69 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30904' Apr 8 22:02:34.046: INFO: stderr: "I0408 22:02:33.980578 3501 log.go:172] (0xc000bb0000) (0xc000970000) Create stream\nI0408 22:02:33.980657 3501 log.go:172] (0xc000bb0000) (0xc000970000) Stream added, broadcasting: 1\nI0408 22:02:33.983359 3501 log.go:172] (0xc000bb0000) Reply frame received for 1\nI0408 22:02:33.983388 3501 log.go:172] (0xc000bb0000) (0xc0009700a0) Create stream\nI0408 22:02:33.983398 3501 log.go:172] (0xc000bb0000) (0xc0009700a0) Stream added, broadcasting: 3\nI0408 22:02:33.984311 3501 log.go:172] (0xc000bb0000) Reply frame received for 3\nI0408 22:02:33.984355 3501 log.go:172] (0xc000bb0000) (0xc0006ddb80) Create stream\nI0408 22:02:33.984370 3501 log.go:172] (0xc000bb0000) (0xc0006ddb80) Stream added, broadcasting: 5\nI0408 22:02:33.985503 3501 log.go:172] (0xc000bb0000) Reply frame received for 5\nI0408 22:02:34.039525 3501 log.go:172] (0xc000bb0000) Data frame received for 5\nI0408 22:02:34.039557 3501 log.go:172] (0xc0006ddb80) 
(5) Data frame handling\nI0408 22:02:34.039578 3501 log.go:172] (0xc0006ddb80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30904\nConnection to 172.17.0.10 30904 port [tcp/30904] succeeded!\nI0408 22:02:34.039594 3501 log.go:172] (0xc000bb0000) Data frame received for 5\nI0408 22:02:34.039648 3501 log.go:172] (0xc0006ddb80) (5) Data frame handling\nI0408 22:02:34.039702 3501 log.go:172] (0xc000bb0000) Data frame received for 3\nI0408 22:02:34.039724 3501 log.go:172] (0xc0009700a0) (3) Data frame handling\nI0408 22:02:34.041832 3501 log.go:172] (0xc000bb0000) Data frame received for 1\nI0408 22:02:34.041848 3501 log.go:172] (0xc000970000) (1) Data frame handling\nI0408 22:02:34.041857 3501 log.go:172] (0xc000970000) (1) Data frame sent\nI0408 22:02:34.041866 3501 log.go:172] (0xc000bb0000) (0xc000970000) Stream removed, broadcasting: 1\nI0408 22:02:34.041900 3501 log.go:172] (0xc000bb0000) Go away received\nI0408 22:02:34.042107 3501 log.go:172] (0xc000bb0000) (0xc000970000) Stream removed, broadcasting: 1\nI0408 22:02:34.042124 3501 log.go:172] (0xc000bb0000) (0xc0009700a0) Stream removed, broadcasting: 3\nI0408 22:02:34.042133 3501 log.go:172] (0xc000bb0000) (0xc0006ddb80) Stream removed, broadcasting: 5\n" Apr 8 22:02:34.046: INFO: stdout: "" Apr 8 22:02:34.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7989 execpodrqw69 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30904' Apr 8 22:02:34.255: INFO: stderr: "I0408 22:02:34.167477 3521 log.go:172] (0xc0005a0160) (0xc000968000) Create stream\nI0408 22:02:34.167537 3521 log.go:172] (0xc0005a0160) (0xc000968000) Stream added, broadcasting: 1\nI0408 22:02:34.170899 3521 log.go:172] (0xc0005a0160) Reply frame received for 1\nI0408 22:02:34.170937 3521 log.go:172] (0xc0005a0160) (0xc0009680a0) Create stream\nI0408 22:02:34.170947 3521 log.go:172] (0xc0005a0160) (0xc0009680a0) Stream added, broadcasting: 3\nI0408 22:02:34.171917 3521 log.go:172] (0xc0005a0160) Reply 
frame received for 3\nI0408 22:02:34.171963 3521 log.go:172] (0xc0005a0160) (0xc000984000) Create stream\nI0408 22:02:34.171982 3521 log.go:172] (0xc0005a0160) (0xc000984000) Stream added, broadcasting: 5\nI0408 22:02:34.172928 3521 log.go:172] (0xc0005a0160) Reply frame received for 5\nI0408 22:02:34.249254 3521 log.go:172] (0xc0005a0160) Data frame received for 5\nI0408 22:02:34.249324 3521 log.go:172] (0xc000984000) (5) Data frame handling\nI0408 22:02:34.249350 3521 log.go:172] (0xc000984000) (5) Data frame sent\nI0408 22:02:34.249368 3521 log.go:172] (0xc0005a0160) Data frame received for 5\nI0408 22:02:34.249382 3521 log.go:172] (0xc000984000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30904\nConnection to 172.17.0.8 30904 port [tcp/30904] succeeded!\nI0408 22:02:34.249397 3521 log.go:172] (0xc0005a0160) Data frame received for 3\nI0408 22:02:34.249463 3521 log.go:172] (0xc0009680a0) (3) Data frame handling\nI0408 22:02:34.251047 3521 log.go:172] (0xc0005a0160) Data frame received for 1\nI0408 22:02:34.251069 3521 log.go:172] (0xc000968000) (1) Data frame handling\nI0408 22:02:34.251081 3521 log.go:172] (0xc000968000) (1) Data frame sent\nI0408 22:02:34.251098 3521 log.go:172] (0xc0005a0160) (0xc000968000) Stream removed, broadcasting: 1\nI0408 22:02:34.251120 3521 log.go:172] (0xc0005a0160) Go away received\nI0408 22:02:34.251472 3521 log.go:172] (0xc0005a0160) (0xc000968000) Stream removed, broadcasting: 1\nI0408 22:02:34.251492 3521 log.go:172] (0xc0005a0160) (0xc0009680a0) Stream removed, broadcasting: 3\nI0408 22:02:34.251503 3521 log.go:172] (0xc0005a0160) (0xc000984000) Stream removed, broadcasting: 5\n" Apr 8 22:02:34.255: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:34.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7989" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.105 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":188,"skipped":3029,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:34.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0408 22:02:45.742420 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 8 22:02:45.742: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:45.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9619" for this suite. 
• [SLOW TEST:11.486 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":189,"skipped":3044,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:45.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 22:02:46.261: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 22:02:48.304: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980166, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980166, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980166, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980166, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 22:02:51.363: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:52.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8007" for this suite. STEP: Destroying namespace "webhook-8007-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.362 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":190,"skipped":3097,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:53.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Apr 8 22:02:53.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 8 22:02:53.506: INFO: stderr: "" Apr 8 22:02:53.506: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:53.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8185" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":191,"skipped":3110,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:53.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 8 22:02:53.749: INFO: Waiting up to 5m0s for pod "pod-741f3300-0d21-40ef-b1da-05fec2f3ae48" in namespace "emptydir-9989" to be "success or failure" Apr 8 22:02:53.760: INFO: Pod "pod-741f3300-0d21-40ef-b1da-05fec2f3ae48": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25593ms Apr 8 22:02:55.764: INFO: Pod "pod-741f3300-0d21-40ef-b1da-05fec2f3ae48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014288971s Apr 8 22:02:57.768: INFO: Pod "pod-741f3300-0d21-40ef-b1da-05fec2f3ae48": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018625766s STEP: Saw pod success Apr 8 22:02:57.768: INFO: Pod "pod-741f3300-0d21-40ef-b1da-05fec2f3ae48" satisfied condition "success or failure" Apr 8 22:02:57.771: INFO: Trying to get logs from node jerma-worker pod pod-741f3300-0d21-40ef-b1da-05fec2f3ae48 container test-container: STEP: delete the pod Apr 8 22:02:57.789: INFO: Waiting for pod pod-741f3300-0d21-40ef-b1da-05fec2f3ae48 to disappear Apr 8 22:02:57.793: INFO: Pod pod-741f3300-0d21-40ef-b1da-05fec2f3ae48 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:02:57.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9989" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3153,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:02:57.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 22:02:57.913: 
INFO: Waiting up to 5m0s for pod "downwardapi-volume-76a67ad9-40e3-4087-aaf5-e0047644ea18" in namespace "downward-api-8588" to be "success or failure" Apr 8 22:02:57.931: INFO: Pod "downwardapi-volume-76a67ad9-40e3-4087-aaf5-e0047644ea18": Phase="Pending", Reason="", readiness=false. Elapsed: 18.084325ms Apr 8 22:02:59.936: INFO: Pod "downwardapi-volume-76a67ad9-40e3-4087-aaf5-e0047644ea18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022308365s Apr 8 22:03:01.939: INFO: Pod "downwardapi-volume-76a67ad9-40e3-4087-aaf5-e0047644ea18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026131395s STEP: Saw pod success Apr 8 22:03:01.939: INFO: Pod "downwardapi-volume-76a67ad9-40e3-4087-aaf5-e0047644ea18" satisfied condition "success or failure" Apr 8 22:03:01.942: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-76a67ad9-40e3-4087-aaf5-e0047644ea18 container client-container: STEP: delete the pod Apr 8 22:03:02.053: INFO: Waiting for pod downwardapi-volume-76a67ad9-40e3-4087-aaf5-e0047644ea18 to disappear Apr 8 22:03:02.106: INFO: Pod downwardapi-volume-76a67ad9-40e3-4087-aaf5-e0047644ea18 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:02.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8588" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3159,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:02.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Apr 8 22:03:02.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 8 22:03:02.408: INFO: stderr: "" Apr 8 22:03:02.408: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:02.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4137" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":194,"skipped":3169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:02.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3e5c661d-a854-401d-a41c-a67f5a6e23e9 STEP: Creating a pod to test consume configMaps Apr 8 22:03:02.510: INFO: Waiting up to 5m0s for pod "pod-configmaps-5332389d-3cdd-4baf-aea8-178d753e07cc" in namespace "configmap-6068" to be "success or failure" Apr 8 22:03:02.520: INFO: Pod "pod-configmaps-5332389d-3cdd-4baf-aea8-178d753e07cc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071424ms Apr 8 22:03:04.524: INFO: Pod "pod-configmaps-5332389d-3cdd-4baf-aea8-178d753e07cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014346354s Apr 8 22:03:06.529: INFO: Pod "pod-configmaps-5332389d-3cdd-4baf-aea8-178d753e07cc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01886065s STEP: Saw pod success Apr 8 22:03:06.529: INFO: Pod "pod-configmaps-5332389d-3cdd-4baf-aea8-178d753e07cc" satisfied condition "success or failure" Apr 8 22:03:06.532: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5332389d-3cdd-4baf-aea8-178d753e07cc container configmap-volume-test: STEP: delete the pod Apr 8 22:03:06.552: INFO: Waiting for pod pod-configmaps-5332389d-3cdd-4baf-aea8-178d753e07cc to disappear Apr 8 22:03:06.573: INFO: Pod pod-configmaps-5332389d-3cdd-4baf-aea8-178d753e07cc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:06.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6068" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:06.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:03:06.625: INFO: >>> kubeConfig: 
/root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:10.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9827" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:10.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5817.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5817.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 22:03:14.883: INFO: DNS probes using dns-5817/dns-test-0027166d-243d-42b0-80ee-1af006c301b1 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:14.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5817" for this suite. 
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":197,"skipped":3375,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:14.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:15.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-328" for this 
suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":198,"skipped":3395,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:15.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 8 22:03:15.423: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 8 22:03:15.448: INFO: Waiting for terminating namespaces to be deleted... 
Apr 8 22:03:15.450: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 8 22:03:15.455: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 22:03:15.455: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 22:03:15.455: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 22:03:15.455: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 22:03:15.455: INFO: pod-logs-websocket-43917b94-b82f-4b37-badf-0b4f6d3a156e from pods-9827 started at 2020-04-08 22:03:06 +0000 UTC (1 container statuses recorded) Apr 8 22:03:15.455: INFO: Container main ready: true, restart count 0 Apr 8 22:03:15.455: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 8 22:03:15.461: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 8 22:03:15.461: INFO: Container kube-bench ready: false, restart count 0 Apr 8 22:03:15.461: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 22:03:15.461: INFO: Container kindnet-cni ready: true, restart count 0 Apr 8 22:03:15.461: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 8 22:03:15.461: INFO: Container kube-proxy ready: true, restart count 0 Apr 8 22:03:15.461: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 8 22:03:15.461: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 8 22:03:15.833: INFO: Pod 
kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 8 22:03:15.833: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Apr 8 22:03:15.833: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Apr 8 22:03:15.833: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 Apr 8 22:03:15.833: INFO: Pod pod-logs-websocket-43917b94-b82f-4b37-badf-0b4f6d3a156e requesting resource cpu=0m on Node jerma-worker STEP: Starting Pods to consume most of the cluster CPU. Apr 8 22:03:15.833: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 8 22:03:15.882: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-727b7975-4c25-4cec-9f6f-53e91269d355.1603f74b30c06181], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5311/filler-pod-727b7975-4c25-4cec-9f6f-53e91269d355 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-727b7975-4c25-4cec-9f6f-53e91269d355.1603f74b8d3dfc03], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-727b7975-4c25-4cec-9f6f-53e91269d355.1603f74be5c0fae9], Reason = [Created], Message = [Created container filler-pod-727b7975-4c25-4cec-9f6f-53e91269d355] STEP: Considering event: Type = [Normal], Name = [filler-pod-727b7975-4c25-4cec-9f6f-53e91269d355.1603f74bf76fbe31], Reason = [Started], Message = [Started container filler-pod-727b7975-4c25-4cec-9f6f-53e91269d355] STEP: Considering event: Type = [Normal], Name = [filler-pod-c54bcf47-de07-4380-8066-a411f6cbc591.1603f74b30fd5292], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5311/filler-pod-c54bcf47-de07-4380-8066-a411f6cbc591 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-c54bcf47-de07-4380-8066-a411f6cbc591.1603f74bc490a1d4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c54bcf47-de07-4380-8066-a411f6cbc591.1603f74c112f722a], Reason = [Created], Message = [Created container filler-pod-c54bcf47-de07-4380-8066-a411f6cbc591] STEP: Considering event: Type = [Normal], Name = [filler-pod-c54bcf47-de07-4380-8066-a411f6cbc591.1603f74c22a4ebf4], Reason = [Started], Message = [Started container filler-pod-c54bcf47-de07-4380-8066-a411f6cbc591] STEP: Considering event: Type = [Warning], Name = [additional-pod.1603f74c978b5aa3], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:23.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5311" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.065 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":199,"skipped":3405,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:23.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:03:23.243: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 8 22:03:25.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9914 create -f -' Apr 8 22:03:28.006: INFO: stderr: "" Apr 8 22:03:28.006: INFO: stdout: 
"e2e-test-crd-publish-openapi-8585-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 8 22:03:28.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9914 delete e2e-test-crd-publish-openapi-8585-crds test-cr' Apr 8 22:03:28.098: INFO: stderr: "" Apr 8 22:03:28.098: INFO: stdout: "e2e-test-crd-publish-openapi-8585-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 8 22:03:28.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9914 apply -f -' Apr 8 22:03:28.407: INFO: stderr: "" Apr 8 22:03:28.407: INFO: stdout: "e2e-test-crd-publish-openapi-8585-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 8 22:03:28.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9914 delete e2e-test-crd-publish-openapi-8585-crds test-cr' Apr 8 22:03:28.511: INFO: stderr: "" Apr 8 22:03:28.511: INFO: stdout: "e2e-test-crd-publish-openapi-8585-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 8 22:03:28.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8585-crds' Apr 8 22:03:29.190: INFO: stderr: "" Apr 8 22:03:29.190: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8585-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:32.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9914" for this suite. 
• [SLOW TEST:8.928 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":200,"skipped":3412,"failed":0} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:32.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-6kfl STEP: Creating a pod to test atomic-volume-subpath Apr 8 22:03:32.211: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-6kfl" in namespace "subpath-1336" to be "success or failure" Apr 8 22:03:32.214: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.772117ms Apr 8 22:03:34.286: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074627702s Apr 8 22:03:36.289: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 4.078196779s Apr 8 22:03:38.294: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 6.08234302s Apr 8 22:03:40.298: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 8.086767837s Apr 8 22:03:42.302: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 10.090796585s Apr 8 22:03:44.306: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 12.09466806s Apr 8 22:03:46.310: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 14.099008454s Apr 8 22:03:48.314: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 16.103214252s Apr 8 22:03:50.319: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 18.108029221s Apr 8 22:03:52.323: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 20.112162661s Apr 8 22:03:54.328: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Running", Reason="", readiness=true. Elapsed: 22.116719318s Apr 8 22:03:56.333: INFO: Pod "pod-subpath-test-projected-6kfl": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.121571368s STEP: Saw pod success Apr 8 22:03:56.333: INFO: Pod "pod-subpath-test-projected-6kfl" satisfied condition "success or failure" Apr 8 22:03:56.336: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-6kfl container test-container-subpath-projected-6kfl: STEP: delete the pod Apr 8 22:03:56.377: INFO: Waiting for pod pod-subpath-test-projected-6kfl to disappear Apr 8 22:03:56.384: INFO: Pod pod-subpath-test-projected-6kfl no longer exists STEP: Deleting pod pod-subpath-test-projected-6kfl Apr 8 22:03:56.384: INFO: Deleting pod "pod-subpath-test-projected-6kfl" in namespace "subpath-1336" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:03:56.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1336" for this suite. • [SLOW TEST:24.298 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":201,"skipped":3414,"failed":0} [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:03:56.392: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:04:08.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9487" for this suite. • [SLOW TEST:12.089 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":202,"skipped":3414,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:04:08.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Apr 8 22:04:08.560: INFO: >>> kubeConfig: /root/.kube/config Apr 8 22:04:10.500: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:04:21.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8285" for this suite. • [SLOW TEST:12.544 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":203,"skipped":3418,"failed":0} [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:04:21.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for 
i in `seq 1 30`; do dig +short dns-test-service-3.dns-6461.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6461.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 22:04:27.150: INFO: DNS probes using dns-test-f413e0ae-ffea-433e-a229-9ba80e8d9aff succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6461.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6461.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 22:04:33.247: INFO: File wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 22:04:33.255: INFO: File jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 8 22:04:33.255: INFO: Lookups using dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 failed for: [wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local] Apr 8 22:04:38.260: INFO: File wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 22:04:38.267: INFO: File jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 22:04:38.267: INFO: Lookups using dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 failed for: [wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local] Apr 8 22:04:43.261: INFO: File wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 22:04:43.264: INFO: File jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 22:04:43.264: INFO: Lookups using dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 failed for: [wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local] Apr 8 22:04:48.260: INFO: File wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 22:04:48.264: INFO: File jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 8 22:04:48.264: INFO: Lookups using dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 failed for: [wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local] Apr 8 22:04:53.261: INFO: File wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 22:04:53.265: INFO: File jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local from pod dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 8 22:04:53.265: INFO: Lookups using dns-6461/dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 failed for: [wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local] Apr 8 22:04:58.264: INFO: DNS probes using dns-test-51f7d649-f774-4498-9e60-a7d1960d2712 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6461.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6461.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6461.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6461.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 22:05:02.839: INFO: DNS probes using dns-test-142bcf02-630d-4d1f-83c5-65e7c8d1f40c succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:05:02.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "dns-6461" for this suite. • [SLOW TEST:41.915 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":204,"skipped":3418,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:05:02.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:05:14.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4978" for this suite. • [SLOW TEST:11.454 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":205,"skipped":3462,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:05:14.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 8 22:05:14.452: INFO: Waiting up to 5m0s for pod "downward-api-6cfcc4e0-247b-48fd-b940-fc3406861fa4" in namespace "downward-api-2936" to be "success or failure" Apr 8 22:05:14.455: INFO: Pod "downward-api-6cfcc4e0-247b-48fd-b940-fc3406861fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.64787ms Apr 8 22:05:16.473: INFO: Pod "downward-api-6cfcc4e0-247b-48fd-b940-fc3406861fa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021192152s Apr 8 22:05:18.478: INFO: Pod "downward-api-6cfcc4e0-247b-48fd-b940-fc3406861fa4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02590259s STEP: Saw pod success Apr 8 22:05:18.478: INFO: Pod "downward-api-6cfcc4e0-247b-48fd-b940-fc3406861fa4" satisfied condition "success or failure" Apr 8 22:05:18.481: INFO: Trying to get logs from node jerma-worker pod downward-api-6cfcc4e0-247b-48fd-b940-fc3406861fa4 container dapi-container: STEP: delete the pod Apr 8 22:05:18.505: INFO: Waiting for pod downward-api-6cfcc4e0-247b-48fd-b940-fc3406861fa4 to disappear Apr 8 22:05:18.509: INFO: Pod downward-api-6cfcc4e0-247b-48fd-b940-fc3406861fa4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:05:18.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2936" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3482,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:05:18.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace 
STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:05:33.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5333" for this suite. STEP: Destroying namespace "nsdeletetest-5768" for this suite. Apr 8 22:05:33.830: INFO: Namespace nsdeletetest-5768 was already deleted STEP: Destroying namespace "nsdeletetest-2425" for this suite. • [SLOW TEST:15.318 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":207,"skipped":3511,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:05:33.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 
[BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod Apr 8 22:05:33.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-20 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 8 22:05:34.036: INFO: stderr: "" Apr 8 22:05:34.036: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Apr 8 22:05:34.036: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 8 22:05:34.036: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-20" to be "running and ready, or succeeded" Apr 8 22:05:34.053: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 16.958095ms Apr 8 22:05:36.058: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021411205s Apr 8 22:05:38.062: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.025429429s Apr 8 22:05:38.062: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 8 22:05:38.062: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Apr 8 22:05:38.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-20' Apr 8 22:05:38.179: INFO: stderr: "" Apr 8 22:05:38.179: INFO: stdout: "I0408 22:05:36.128612 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/cg78 507\nI0408 22:05:36.328790 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/42g 410\nI0408 22:05:36.528774 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/kvd 448\nI0408 22:05:36.728832 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/zkds 339\nI0408 22:05:36.928933 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/lwc 517\nI0408 22:05:37.128869 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/fm8s 455\nI0408 22:05:37.328811 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/rj5r 284\nI0408 22:05:37.528796 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/4wq 291\nI0408 22:05:37.728785 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vf8t 423\nI0408 22:05:37.928854 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/2rn 265\nI0408 22:05:38.128820 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/2p7s 203\n" STEP: limiting log lines Apr 8 22:05:38.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-20 --tail=1' Apr 8 22:05:38.285: INFO: stderr: "" Apr 8 22:05:38.285: INFO: stdout: "I0408 22:05:38.128820 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/2p7s 203\n" Apr 8 22:05:38.285: INFO: got output "I0408 22:05:38.128820 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/2p7s 203\n" STEP: limiting log bytes Apr 8 22:05:38.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-20 --limit-bytes=1' Apr 8 22:05:38.398: 
INFO: stderr: "" Apr 8 22:05:38.398: INFO: stdout: "I" Apr 8 22:05:38.398: INFO: got output "I" STEP: exposing timestamps Apr 8 22:05:38.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-20 --tail=1 --timestamps' Apr 8 22:05:38.504: INFO: stderr: "" Apr 8 22:05:38.504: INFO: stdout: "2020-04-08T22:05:38.328953837Z I0408 22:05:38.328775 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/qrlc 369\n" Apr 8 22:05:38.504: INFO: got output "2020-04-08T22:05:38.328953837Z I0408 22:05:38.328775 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/qrlc 369\n" STEP: restricting to a time range Apr 8 22:05:41.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-20 --since=1s' Apr 8 22:05:41.122: INFO: stderr: "" Apr 8 22:05:41.122: INFO: stdout: "I0408 22:05:40.128791 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/bl4k 400\nI0408 22:05:40.328839 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/nzf 451\nI0408 22:05:40.528800 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/jkwn 293\nI0408 22:05:40.728752 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/xjsk 255\nI0408 22:05:40.928794 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/kcvr 412\n" Apr 8 22:05:41.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-20 --since=24h' Apr 8 22:05:41.238: INFO: stderr: "" Apr 8 22:05:41.238: INFO: stdout: "I0408 22:05:36.128612 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/cg78 507\nI0408 22:05:36.328790 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/42g 410\nI0408 22:05:36.528774 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/kvd 448\nI0408 22:05:36.728832 1 logs_generator.go:76] 3 POST 
/api/v1/namespaces/ns/pods/zkds 339\nI0408 22:05:36.928933 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/lwc 517\nI0408 22:05:37.128869 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/fm8s 455\nI0408 22:05:37.328811 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/rj5r 284\nI0408 22:05:37.528796 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/4wq 291\nI0408 22:05:37.728785 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/vf8t 423\nI0408 22:05:37.928854 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/2rn 265\nI0408 22:05:38.128820 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/2p7s 203\nI0408 22:05:38.328775 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/qrlc 369\nI0408 22:05:38.528803 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/6g5 404\nI0408 22:05:38.728798 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/gnkz 504\nI0408 22:05:38.928818 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/w2n 552\nI0408 22:05:39.128819 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/mrw4 250\nI0408 22:05:39.328803 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/hms 320\nI0408 22:05:39.528787 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/4kd 343\nI0408 22:05:39.728799 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/4fh 271\nI0408 22:05:39.928806 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/9fp4 200\nI0408 22:05:40.128791 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/bl4k 400\nI0408 22:05:40.328839 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/nzf 451\nI0408 22:05:40.528800 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/jkwn 293\nI0408 22:05:40.728752 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/xjsk 255\nI0408 22:05:40.928794 1 logs_generator.go:76] 24 PUT 
/api/v1/namespaces/kube-system/pods/kcvr 412\nI0408 22:05:41.128779 1 logs_generator.go:76] 25 POST /api/v1/namespaces/default/pods/4xr 584\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Apr 8 22:05:41.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-20' Apr 8 22:05:43.887: INFO: stderr: "" Apr 8 22:05:43.887: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:05:43.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-20" for this suite. • [SLOW TEST:10.062 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":208,"skipped":3518,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:05:43.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be 
provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-a578373a-bd8f-411a-8be4-bb0aa53178d1 STEP: Creating configMap with name cm-test-opt-upd-0e395815-8d0b-4f1e-87df-920e5aadcfa7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a578373a-bd8f-411a-8be4-bb0aa53178d1 STEP: Updating configmap cm-test-opt-upd-0e395815-8d0b-4f1e-87df-920e5aadcfa7 STEP: Creating configMap with name cm-test-opt-create-8d4962e7-3890-4ed1-aa9f-7d7f56a5058c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:07:10.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6258" for this suite. • [SLOW TEST:86.576 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3548,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:07:10.474: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 8 22:07:15.578: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:07:15.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6370" for this suite. • [SLOW TEST:5.198 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":210,"skipped":3583,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:07:15.672: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 8 22:07:15.821: INFO: Waiting up to 5m0s for pod "pod-2227ed7c-1bb8-432e-ae6f-1cf27205c0d7" in namespace "emptydir-1430" to be "success or failure" Apr 8 22:07:15.852: INFO: Pod "pod-2227ed7c-1bb8-432e-ae6f-1cf27205c0d7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.908134ms Apr 8 22:07:17.870: INFO: Pod "pod-2227ed7c-1bb8-432e-ae6f-1cf27205c0d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049165302s Apr 8 22:07:19.874: INFO: Pod "pod-2227ed7c-1bb8-432e-ae6f-1cf27205c0d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053493804s STEP: Saw pod success Apr 8 22:07:19.874: INFO: Pod "pod-2227ed7c-1bb8-432e-ae6f-1cf27205c0d7" satisfied condition "success or failure" Apr 8 22:07:19.877: INFO: Trying to get logs from node jerma-worker2 pod pod-2227ed7c-1bb8-432e-ae6f-1cf27205c0d7 container test-container: STEP: delete the pod Apr 8 22:07:19.897: INFO: Waiting for pod pod-2227ed7c-1bb8-432e-ae6f-1cf27205c0d7 to disappear Apr 8 22:07:19.901: INFO: Pod pod-2227ed7c-1bb8-432e-ae6f-1cf27205c0d7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:07:19.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1430" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3603,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:07:19.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6376/configmap-test-e05e0d30-6678-4f59-898a-e81c17c78948 STEP: Creating a pod to test consume configMaps Apr 8 22:07:19.994: INFO: Waiting up to 5m0s for pod "pod-configmaps-df1c6264-7432-4098-82fa-5ecf296a5546" in namespace "configmap-6376" to be "success or failure" Apr 8 22:07:19.996: INFO: Pod "pod-configmaps-df1c6264-7432-4098-82fa-5ecf296a5546": Phase="Pending", Reason="", readiness=false. Elapsed: 2.705494ms Apr 8 22:07:22.000: INFO: Pod "pod-configmaps-df1c6264-7432-4098-82fa-5ecf296a5546": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006720794s Apr 8 22:07:24.005: INFO: Pod "pod-configmaps-df1c6264-7432-4098-82fa-5ecf296a5546": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011361701s STEP: Saw pod success Apr 8 22:07:24.005: INFO: Pod "pod-configmaps-df1c6264-7432-4098-82fa-5ecf296a5546" satisfied condition "success or failure" Apr 8 22:07:24.008: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-df1c6264-7432-4098-82fa-5ecf296a5546 container env-test: STEP: delete the pod Apr 8 22:07:24.028: INFO: Waiting for pod pod-configmaps-df1c6264-7432-4098-82fa-5ecf296a5546 to disappear Apr 8 22:07:24.033: INFO: Pod pod-configmaps-df1c6264-7432-4098-82fa-5ecf296a5546 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:07:24.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6376" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3649,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:07:24.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-35 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 8 22:07:24.207: 
INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 8 22:07:48.335: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.223:8080/dial?request=hostname&protocol=http&host=10.244.1.222&port=8080&tries=1'] Namespace:pod-network-test-35 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 22:07:48.335: INFO: >>> kubeConfig: /root/.kube/config I0408 22:07:48.366504 6 log.go:172] (0xc001690580) (0xc000f803c0) Create stream I0408 22:07:48.366538 6 log.go:172] (0xc001690580) (0xc000f803c0) Stream added, broadcasting: 1 I0408 22:07:48.368731 6 log.go:172] (0xc001690580) Reply frame received for 1 I0408 22:07:48.368778 6 log.go:172] (0xc001690580) (0xc002806280) Create stream I0408 22:07:48.368812 6 log.go:172] (0xc001690580) (0xc002806280) Stream added, broadcasting: 3 I0408 22:07:48.370092 6 log.go:172] (0xc001690580) Reply frame received for 3 I0408 22:07:48.370135 6 log.go:172] (0xc001690580) (0xc002806460) Create stream I0408 22:07:48.370152 6 log.go:172] (0xc001690580) (0xc002806460) Stream added, broadcasting: 5 I0408 22:07:48.371260 6 log.go:172] (0xc001690580) Reply frame received for 5 I0408 22:07:48.458820 6 log.go:172] (0xc001690580) Data frame received for 3 I0408 22:07:48.458842 6 log.go:172] (0xc002806280) (3) Data frame handling I0408 22:07:48.458858 6 log.go:172] (0xc002806280) (3) Data frame sent I0408 22:07:48.459635 6 log.go:172] (0xc001690580) Data frame received for 3 I0408 22:07:48.459667 6 log.go:172] (0xc002806280) (3) Data frame handling I0408 22:07:48.459696 6 log.go:172] (0xc001690580) Data frame received for 5 I0408 22:07:48.459711 6 log.go:172] (0xc002806460) (5) Data frame handling I0408 22:07:48.461490 6 log.go:172] (0xc001690580) Data frame received for 1 I0408 22:07:48.461517 6 log.go:172] (0xc000f803c0) (1) Data frame handling I0408 22:07:48.461532 6 log.go:172] (0xc000f803c0) (1) Data frame sent 
I0408 22:07:48.461542 6 log.go:172] (0xc001690580) (0xc000f803c0) Stream removed, broadcasting: 1 I0408 22:07:48.461603 6 log.go:172] (0xc001690580) (0xc000f803c0) Stream removed, broadcasting: 1 I0408 22:07:48.461618 6 log.go:172] (0xc001690580) (0xc002806280) Stream removed, broadcasting: 3 I0408 22:07:48.461714 6 log.go:172] (0xc001690580) Go away received I0408 22:07:48.461825 6 log.go:172] (0xc001690580) (0xc002806460) Stream removed, broadcasting: 5 Apr 8 22:07:48.461: INFO: Waiting for responses: map[] Apr 8 22:07:48.464: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.223:8080/dial?request=hostname&protocol=http&host=10.244.2.45&port=8080&tries=1'] Namespace:pod-network-test-35 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 22:07:48.465: INFO: >>> kubeConfig: /root/.kube/config I0408 22:07:48.498071 6 log.go:172] (0xc0016c0580) (0xc0028074a0) Create stream I0408 22:07:48.498101 6 log.go:172] (0xc0016c0580) (0xc0028074a0) Stream added, broadcasting: 1 I0408 22:07:48.500416 6 log.go:172] (0xc0016c0580) Reply frame received for 1 I0408 22:07:48.500465 6 log.go:172] (0xc0016c0580) (0xc000c40aa0) Create stream I0408 22:07:48.500480 6 log.go:172] (0xc0016c0580) (0xc000c40aa0) Stream added, broadcasting: 3 I0408 22:07:48.501404 6 log.go:172] (0xc0016c0580) Reply frame received for 3 I0408 22:07:48.501432 6 log.go:172] (0xc0016c0580) (0xc002807720) Create stream I0408 22:07:48.501442 6 log.go:172] (0xc0016c0580) (0xc002807720) Stream added, broadcasting: 5 I0408 22:07:48.502313 6 log.go:172] (0xc0016c0580) Reply frame received for 5 I0408 22:07:48.562025 6 log.go:172] (0xc0016c0580) Data frame received for 3 I0408 22:07:48.562062 6 log.go:172] (0xc000c40aa0) (3) Data frame handling I0408 22:07:48.562083 6 log.go:172] (0xc000c40aa0) (3) Data frame sent I0408 22:07:48.562474 6 log.go:172] (0xc0016c0580) Data frame received for 3 I0408 22:07:48.562509 6 
log.go:172] (0xc000c40aa0) (3) Data frame handling I0408 22:07:48.562536 6 log.go:172] (0xc0016c0580) Data frame received for 5 I0408 22:07:48.562549 6 log.go:172] (0xc002807720) (5) Data frame handling I0408 22:07:48.564045 6 log.go:172] (0xc0016c0580) Data frame received for 1 I0408 22:07:48.564105 6 log.go:172] (0xc0028074a0) (1) Data frame handling I0408 22:07:48.564137 6 log.go:172] (0xc0028074a0) (1) Data frame sent I0408 22:07:48.564170 6 log.go:172] (0xc0016c0580) (0xc0028074a0) Stream removed, broadcasting: 1 I0408 22:07:48.564223 6 log.go:172] (0xc0016c0580) Go away received I0408 22:07:48.564310 6 log.go:172] (0xc0016c0580) (0xc0028074a0) Stream removed, broadcasting: 1 I0408 22:07:48.564339 6 log.go:172] (0xc0016c0580) (0xc000c40aa0) Stream removed, broadcasting: 3 I0408 22:07:48.564360 6 log.go:172] (0xc0016c0580) (0xc002807720) Stream removed, broadcasting: 5 Apr 8 22:07:48.564: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:07:48.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-35" for this suite. 
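The connectivity probe logged above works by exec'ing `curl` against the agnhost `/dial` endpoint, which forwards the request to a target pod and reports which hostnames answered; the test then strikes each answering endpoint off its expected set until nothing remains (`Waiting for responses: map[]` is Go's rendering of that empty map). A minimal offline sketch of that bookkeeping, where `dial` is a hypothetical stub standing in for the real `curl` exec:

```python
# Sketch of the e2e intra-pod connectivity bookkeeping: repeatedly "dial"
# the target endpoints and remove them from the expected set as they answer.
# `dial` is a hypothetical stand-in for:
#   curl 'http://<test-pod>:8080/dial?request=hostname&host=<endpoint>&tries=1'

def dial(endpoint: str) -> list[str]:
    # Simulate a healthy pod answering with its own hostname.
    return [endpoint]

def wait_for_responses(expected: set[str], max_tries: int = 3) -> set[str]:
    """Return the endpoints that never answered (empty set == success)."""
    remaining = set(expected)
    for _ in range(max_tries):
        if not remaining:
            break
        for ep in list(remaining):
            for hostname in dial(ep):
                remaining.discard(hostname)
    return remaining

left = wait_for_responses({"10.244.1.222", "10.244.2.45"})
print(f"still waiting for: {sorted(left)}")  # prints: still waiting for: []
```

In the real test the two dials in the log target the pod IPs 10.244.1.222 and 10.244.2.45, and the empty map after each confirms both backend pods were reachable over HTTP.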
• [SLOW TEST:24.505 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3664,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:07:48.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 8 22:07:52.686: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 8 22:08:02.804: INFO: no pod exists with the 
name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:08:02.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2875" for this suite. • [SLOW TEST:14.248 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":214,"skipped":3677,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:08:02.821: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-4d3ab0fe-6157-4c2c-bf45-960c60bf5c12 STEP: Creating a pod to test consume configMaps Apr 8 22:08:02.952: INFO: 
Waiting up to 5m0s for pod "pod-projected-configmaps-78b035b7-fd7a-4c3f-b28a-6cc05f41991d" in namespace "projected-7383" to be "success or failure" Apr 8 22:08:02.966: INFO: Pod "pod-projected-configmaps-78b035b7-fd7a-4c3f-b28a-6cc05f41991d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.284351ms Apr 8 22:08:04.970: INFO: Pod "pod-projected-configmaps-78b035b7-fd7a-4c3f-b28a-6cc05f41991d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01876629s Apr 8 22:08:06.974: INFO: Pod "pod-projected-configmaps-78b035b7-fd7a-4c3f-b28a-6cc05f41991d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02231385s STEP: Saw pod success Apr 8 22:08:06.974: INFO: Pod "pod-projected-configmaps-78b035b7-fd7a-4c3f-b28a-6cc05f41991d" satisfied condition "success or failure" Apr 8 22:08:06.984: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-78b035b7-fd7a-4c3f-b28a-6cc05f41991d container projected-configmap-volume-test: STEP: delete the pod Apr 8 22:08:07.012: INFO: Waiting for pod pod-projected-configmaps-78b035b7-fd7a-4c3f-b28a-6cc05f41991d to disappear Apr 8 22:08:07.031: INFO: Pod pod-projected-configmaps-78b035b7-fd7a-4c3f-b28a-6cc05f41991d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:08:07.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7383" for this suite. 
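The volume tests above all follow the same "success or failure" pattern visible in the log: create a pod that consumes the mounted data and exits, then poll its phase (roughly every 2s, up to 5m) until it reaches a terminal state. A rough sketch of that polling loop, with a fake phase sequence standing in for the API server (`get_phase` is a hypothetical stub, not the framework's actual helper):

```python
import itertools
import time

# Sketch of the e2e "success or failure" wait: poll the pod phase until it
# reaches a terminal state or the timeout expires. `get_phase` is a
# hypothetical stub for the real GET on the pod object.

def wait_for_terminal_phase(get_phase, timeout_s=300.0, interval_s=0.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval_s)  # the real test waits ~2s between polls
    raise TimeoutError("pod never reached a terminal phase")

# Simulate the Pending -> Pending -> Succeeded progression seen in the log.
phases = itertools.chain(["Pending", "Pending"], itertools.repeat("Succeeded"))
print(wait_for_terminal_phase(lambda: next(phases)))  # prints: Succeeded
```

This matches the elapsed times in the log (14ms, 2.01s, 4.02s): the pod is observed `Pending` on the first two polls and `Succeeded` on the third, after which the test fetches the container logs and deletes the pod.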
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:08:07.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:08:12.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5940" for this suite. 
• [SLOW TEST:5.707 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":216,"skipped":3703,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:08:12.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:08:12.818: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 8 22:08:12.836: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 8 22:08:17.839: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 8 22:08:17.840: INFO: Creating deployment "test-rolling-update-deployment" Apr 8 22:08:17.843: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 8 22:08:17.855: 
INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 8 22:08:19.861: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 8 22:08:19.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980497, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980497, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980497, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980497, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 8 22:08:21.867: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 8 22:08:21.874: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4665 /apis/apps/v1/namespaces/deployment-4665/deployments/test-rolling-update-deployment 47345996-03bc-4a06-9d3a-c154a9d12db4 6522247 1 2020-04-08 22:08:17 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 
+0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00354e0a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-08 22:08:17 +0000 UTC,LastTransitionTime:2020-04-08 22:08:17 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-08 22:08:20 +0000 UTC,LastTransitionTime:2020-04-08 22:08:17 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 8 22:08:21.877: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-4665 /apis/apps/v1/namespaces/deployment-4665/replicasets/test-rolling-update-deployment-67cf4f6444 e22304d3-d863-40b8-858c-eda5844f0237 6522236 1 2020-04-08 22:08:17 +0000 UTC 
map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 47345996-03bc-4a06-9d3a-c154a9d12db4 0xc00354e557 0xc00354e558}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00354e5c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 8 22:08:21.877: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 8 22:08:21.877: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4665 /apis/apps/v1/namespaces/deployment-4665/replicasets/test-rolling-update-controller 9f9888d2-8954-4588-b1b3-b8a621242909 6522245 2 2020-04-08 22:08:12 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment 
test-rolling-update-deployment 47345996-03bc-4a06-9d3a-c154a9d12db4 0xc00354e487 0xc00354e488}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00354e4e8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 22:08:21.880: INFO: Pod "test-rolling-update-deployment-67cf4f6444-qqlgz" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-qqlgz test-rolling-update-deployment-67cf4f6444- deployment-4665 /api/v1/namespaces/deployment-4665/pods/test-rolling-update-deployment-67cf4f6444-qqlgz a6176a79-53ee-40b1-bd87-303c56700fec 6522235 0 2020-04-08 22:08:17 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 e22304d3-d863-40b8-858c-eda5844f0237 0xc003485ce7 0xc003485ce8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mnrzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mnrzb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mnrzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.226,StartTime:2020-04-08 22:08:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:20 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://546b1145cc489f79d352eeafb905ae0cbe8581996eb25253ec109bb3ddd7d527,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.226,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:08:21.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4665" for this suite. • [SLOW TEST:9.140 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":217,"skipped":3715,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:08:21.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:08:38.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-728" for this suite. • [SLOW TEST:16.341 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":218,"skipped":3717,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:08:38.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:08:38.281: INFO: Creating deployment "webserver-deployment" Apr 8 22:08:38.337: INFO: Waiting for observed generation 1 Apr 8 22:08:40.344: INFO: Waiting for all required pods to come up Apr 8 22:08:40.347: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 8 22:08:48.355: INFO: Waiting for deployment "webserver-deployment" to complete Apr 8 22:08:48.360: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 8 22:08:48.366: INFO: Updating deployment webserver-deployment Apr 8 22:08:48.366: INFO: Waiting for observed generation 2 Apr 8 22:08:50.413: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 8 22:08:50.415: INFO: Waiting for the first rollout's replicaset to have .spec.replicas 
= 8 Apr 8 22:08:50.417: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 8 22:08:50.424: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 8 22:08:50.424: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 8 22:08:50.426: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 8 22:08:50.430: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 8 22:08:50.430: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 8 22:08:50.435: INFO: Updating deployment webserver-deployment Apr 8 22:08:50.435: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 8 22:08:50.484: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 8 22:08:50.511: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 8 22:08:50.812: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-4572 /apis/apps/v1/namespaces/deployment-4572/deployments/webserver-deployment e38fb799-2732-472c-ab7a-e13efa478c69 6522604 3 2020-04-08 22:08:38 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00352f4b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-08 22:08:48 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-08 22:08:50 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 8 22:08:50.843: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-4572 /apis/apps/v1/namespaces/deployment-4572/replicasets/webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 6522587 3 2020-04-08 22:08:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment e38fb799-2732-472c-ab7a-e13efa478c69 0xc00352f997 0xc00352f998}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00352fa08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 8 22:08:50.843: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 8 22:08:50.843: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-4572 /apis/apps/v1/namespaces/deployment-4572/replicasets/webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 6522639 3 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment e38fb799-2732-472c-ab7a-e13efa478c69 0xc00352f8d7 0xc00352f8d8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] 
[{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00352f938 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 8 22:08:50.940: INFO: Pod "webserver-deployment-595b5b9587-2q4rh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2q4rh webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-2q4rh 523cfe46-1e16-4e84-9b98-b80f7ddace22 6522518 0 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc00352fea7 0xc00352fea8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.233,StartTime:2020-04-08 22:08:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://37ddcfbf0ca2d78481a296786cafb138df44ac6ebf6bea7fed20bcff8cc828c4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.941: INFO: Pod "webserver-deployment-595b5b9587-2vv57" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2vv57 webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-2vv57 21a42687-f2ec-4e85-af75-4bd80ea7ead4 6522466 0 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec027 0xc005dec028}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.48,StartTime:2020-04-08 22:08:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:44 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6600e202fbca76edc16e82e3688bc30d15225e24f8a47bdc71303b46e373c2eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.941: INFO: Pod "webserver-deployment-595b5b9587-2whf4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2whf4 webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-2whf4 8e19bcd0-dbcc-428e-bf3c-48ed13dbbd27 6522629 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec1a7 0xc005dec1a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.941: INFO: Pod "webserver-deployment-595b5b9587-4fhr9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4fhr9 webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-4fhr9 5690c685-db7a-455c-b83a-2ad84b18f857 6522608 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec2c7 0xc005dec2c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.942: INFO: Pod "webserver-deployment-595b5b9587-4lbwd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4lbwd webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-4lbwd 0a23f9fc-89a5-4a90-b930-0433ccd5bedf 6522628 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec3e7 0xc005dec3e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.942: INFO: Pod "webserver-deployment-595b5b9587-55ldj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-55ldj webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-55ldj c6f63cc8-4288-4b2a-b867-de74314f7329 6522616 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec507 0xc005dec508}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.942: INFO: Pod "webserver-deployment-595b5b9587-6tfqs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6tfqs webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-6tfqs a14641f0-0eb6-4fc7-a58a-509d9559b3a6 6522610 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec627 0xc005dec628}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.942: INFO: Pod "webserver-deployment-595b5b9587-8t9xs" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8t9xs webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-8t9xs f8845ccc-70a8-49f4-a736-f2c25427ea3b 6522439 0 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec747 0xc005dec748}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.47,StartTime:2020-04-08 22:08:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e9d53347ced7f1134a3341d3911b355be3433ac127e50610ecdf0965429a279a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.47,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.943: INFO: Pod "webserver-deployment-595b5b9587-b4v6v" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b4v6v webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-b4v6v 3881e878-55d0-48ac-bf21-e3ae24a49f5d 6522595 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec8c7 0xc005dec8c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.943: INFO: Pod "webserver-deployment-595b5b9587-dlvmv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dlvmv webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-dlvmv 6fc4a34b-2186-4fb2-9488-65cddce47bcb 6522625 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005dec9e7 0xc005dec9e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.943: INFO: Pod "webserver-deployment-595b5b9587-dvpfz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dvpfz webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-dvpfz ba380c50-a119-47f7-ab41-e30b244e01df 6522640 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005decb07 0xc005decb08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-08 22:08:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.943: INFO: Pod "webserver-deployment-595b5b9587-hdln2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hdln2 webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-hdln2 8a882081-bd74-442b-a982-0470eb7fb24a 6522511 0 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005decc67 0xc005decc68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.51,StartTime:2020-04-08 22:08:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://416d6a7957494f14ef6ab2dd3e1fda5f20b80a596d6a6de5b1689caa678c199d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.51,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.943: INFO: Pod "webserver-deployment-595b5b9587-jp6zm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jp6zm webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-jp6zm fc24d31a-ff1b-4701-8433-520850034294 6522627 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005decde7 0xc005decde8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.944: INFO: Pod "webserver-deployment-595b5b9587-kvrt6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-kvrt6 webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-kvrt6 ac39cced-4f91-45d0-91ba-65ebc59e6d70 6522623 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005decf07 0xc005decf08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.944: INFO: Pod "webserver-deployment-595b5b9587-nf285" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nf285 webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-nf285 f732194e-4a09-44fd-9ad6-25f9eac72579 6522468 0 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005ded027 0xc005ded028}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.230,StartTime:2020-04-08 22:08:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://db0f54aecf8fa6dc4656ec2d1731e7b337094b39430c92b65989d5abca884f81,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.944: INFO: Pod "webserver-deployment-595b5b9587-sn74h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sn74h webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-sn74h 9dfeb50d-298f-4d08-a4a5-e2b5424b2b93 6522460 0 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005ded1a7 0xc005ded1a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.229,StartTime:2020-04-08 22:08:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://00499a71167d6fbfc17d2d6315cd008e8c01f98b259c4e9d512a6d99f8fe3ee5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.229,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.945: INFO: Pod "webserver-deployment-595b5b9587-wq4cs" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wq4cs webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-wq4cs b8cf80a5-55ff-4dfb-ad3f-c7379d02225c 6522611 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005ded337 0xc005ded338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.945: INFO: Pod "webserver-deployment-595b5b9587-ws6h9" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ws6h9 webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-ws6h9 58d80fce-200d-4dd8-bad6-f1da7fe3e5a9 6522637 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005ded467 0xc005ded468}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-08 22:08:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.945: INFO: Pod "webserver-deployment-595b5b9587-xrqgz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-xrqgz webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-xrqgz bb5f8e8b-eb68-4d2b-ba65-d93661d4aa72 6522478 0 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005ded5c7 0xc005ded5c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.49,StartTime:2020-04-08 22:08:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:45 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://169992b2ab6c0f14e73aecb5b722a784440c7b003690c82634bfa06b691dde61,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.946: INFO: Pod "webserver-deployment-595b5b9587-znp7c" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-znp7c webserver-deployment-595b5b9587- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-595b5b9587-znp7c 3c385375-422e-4b2a-9ae6-04829f8df1b8 6522488 0 2020-04-08 22:08:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2449c4bf-2ce8-40fb-b3f3-1be6462d793a 0xc005ded757 0xc005ded758}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.231,StartTime:2020-04-08 22:08:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:08:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9dc2db0701e8d3dfb3a02bf078bc4b897d0185d8754e7aadeab201c77f465385,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.946: INFO: Pod "webserver-deployment-c7997dcc8-2q5ss" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2q5ss webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-2q5ss 5eb6a4dc-3614-42aa-a054-c22470b4c265 6522607 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc005ded8d7 0xc005ded8d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.946: INFO: Pod "webserver-deployment-c7997dcc8-4lp8v" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4lp8v webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-4lp8v 94e34b9e-7af0-43c2-b495-0b4f947635c6 6522624 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc005deda07 0xc005deda08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.946: INFO: Pod "webserver-deployment-c7997dcc8-9fnt4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9fnt4 webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-9fnt4 14ebab79-833b-4757-879c-77f387429746 6522545 0 2020-04-08 22:08:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc005dedb37 0xc005dedb38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-08 22:08:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.946: INFO: Pod "webserver-deployment-c7997dcc8-cwvqx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cwvqx webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-cwvqx 3a236868-8c3e-4d64-8015-1e6c929a21e8 6522631 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc005dedcb7 0xc005dedcb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.947: INFO: Pod "webserver-deployment-c7997dcc8-hfqf9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hfqf9 webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-hfqf9 2051dbd7-445c-4bf7-9b33-2f9641a60c01 6522633 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc005dedde7 0xc005dedde8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.947: INFO: Pod "webserver-deployment-c7997dcc8-k6s55" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-k6s55 webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-k6s55 48127f19-f107-4066-9cff-fdc97c946a20 6522573 0 2020-04-08 22:08:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc005dedf47 0xc005dedf48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-08 22:08:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.947: INFO: Pod "webserver-deployment-c7997dcc8-kgdgm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kgdgm webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-kgdgm 58fe6473-f014-467a-a4a2-b25064eb55d1 6522558 0 2020-04-08 22:08:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc000832257 0xc000832258}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-08 22:08:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.947: INFO: Pod "webserver-deployment-c7997dcc8-ndffk" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ndffk webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-ndffk 24d6833c-af60-4346-adc6-0b9e73fef247 6522635 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc000832887 0xc000832888}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.947: INFO: Pod "webserver-deployment-c7997dcc8-ptlsj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-ptlsj webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-ptlsj 4c4b73f7-e8f0-481b-a1de-73162a9786c4 6522645 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc000832b67 0xc000832b68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.947: INFO: Pod "webserver-deployment-c7997dcc8-rrmbb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rrmbb webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-rrmbb 85ba2ed3-ad9e-4e47-aecd-ead65da346f9 6522548 0 2020-04-08 22:08:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc000832dc7 0xc000832dc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-08 22:08:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.948: INFO: Pod "webserver-deployment-c7997dcc8-sb9j9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-sb9j9 webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-sb9j9 f357c0fb-babf-4e7f-8414-28086ceabd28 6522636 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc000833077 0xc000833078}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 8 22:08:50.948: INFO: Pod "webserver-deployment-c7997dcc8-w5274" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w5274 webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-w5274 b001ac88-283f-40c1-8017-b921388dfb21 6522575 0 2020-04-08 22:08:48 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc000833287 0xc000833288}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-08 22:08:48 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 8 22:08:50.948: INFO: Pod "webserver-deployment-c7997dcc8-xdd6j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xdd6j webserver-deployment-c7997dcc8- deployment-4572 /api/v1/namespaces/deployment-4572/pods/webserver-deployment-c7997dcc8-xdd6j 8c0e50d4-7fb3-4d40-9982-564910116280 6522622 0 2020-04-08 22:08:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 99bc5094-c9f4-45cd-a511-0a87d57b7004 0xc0008335b7 0xc0008335b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-m9qdg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-m9qdg,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-m9qdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:08:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:08:50.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4572" for this suite. 
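The Deployment test concluding above ("deployment should support proportional scaling") exercises how a Deployment resized mid-rollout splits the new replica count across its ReplicaSets in proportion to their current sizes. A simplified sketch of that idea, not the exact kube-controller-manager algorithm (which also weighs maxSurge and annotations when distributing rounding leftovers):

```python
def scale_proportionally(replica_sets, new_total):
    """Distribute new_total replicas across ReplicaSets proportionally.

    replica_sets: dict of name -> current replica count (illustrative shape).
    Returns a dict with the new allocation per ReplicaSet.
    """
    current_total = sum(replica_sets.values())
    if current_total == 0:
        # Nothing running yet: give everything to the first ReplicaSet.
        first = next(iter(replica_sets))
        return {name: (new_total if name == first else 0) for name in replica_sets}
    allocation = {}
    assigned = 0
    for name, size in replica_sets.items():
        # Floor of each ReplicaSet's proportional share of the new total.
        share = new_total * size // current_total
        allocation[name] = share
        assigned += share
    # Hand any rounding leftover to the largest ReplicaSet (a simplification;
    # the real controller has tie-breaking rules based on size and age).
    if assigned < new_total:
        largest = max(replica_sets, key=replica_sets.get)
        allocation[largest] += new_total - assigned
    return allocation
```

For example, scaling a Deployment with an old ReplicaSet at 8 replicas and a new one at 2 up to 30 total keeps the 8:2 ratio, yielding 24 and 6.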
• [SLOW TEST:12.984 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":219,"skipped":3731,"failed":0} S ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:08:51.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:05.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8859" for this suite. 
• [SLOW TEST:14.506 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3732,"failed":0} SSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:05.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-3f779698-b737-43ef-adde-da6c8badc033 STEP: Creating secret with name s-test-opt-upd-1b301b8a-7228-40e2-9cde-413b29869684 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-3f779698-b737-43ef-adde-da6c8badc033 STEP: Updating secret s-test-opt-upd-1b301b8a-7228-40e2-9cde-413b29869684 STEP: Creating secret with name s-test-opt-create-6460f016-d245-4889-9cb6-213db4b1d051 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:17.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4764" for this suite. • [SLOW TEST:11.722 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3735,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:17.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-4c3384b4-ab69-4b71-b4e2-2671b36d8150 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-4c3384b4-ab69-4b71-b4e2-2671b36d8150 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Apr 8 22:09:23.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6123" for this suite. • [SLOW TEST:6.141 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3768,"failed":0} [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:23.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-22a1e93e-0b2f-4393-ab12-628821245817 STEP: Creating a pod to test consume secrets Apr 8 22:09:23.666: INFO: Waiting up to 5m0s for pod "pod-secrets-ece488e9-ff63-44c7-8d24-18c46a88bfc6" in namespace "secrets-4480" to be "success or failure" Apr 8 22:09:23.669: INFO: Pod "pod-secrets-ece488e9-ff63-44c7-8d24-18c46a88bfc6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.355856ms Apr 8 22:09:25.716: INFO: Pod "pod-secrets-ece488e9-ff63-44c7-8d24-18c46a88bfc6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049997848s Apr 8 22:09:27.720: INFO: Pod "pod-secrets-ece488e9-ff63-44c7-8d24-18c46a88bfc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054028806s STEP: Saw pod success Apr 8 22:09:27.720: INFO: Pod "pod-secrets-ece488e9-ff63-44c7-8d24-18c46a88bfc6" satisfied condition "success or failure" Apr 8 22:09:27.722: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ece488e9-ff63-44c7-8d24-18c46a88bfc6 container secret-volume-test: STEP: delete the pod Apr 8 22:09:27.853: INFO: Waiting for pod pod-secrets-ece488e9-ff63-44c7-8d24-18c46a88bfc6 to disappear Apr 8 22:09:27.865: INFO: Pod pod-secrets-ece488e9-ff63-44c7-8d24-18c46a88bfc6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:27.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4480" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3768,"failed":0} S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:27.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Apr 8 22:09:32.176: 
INFO: Pod pod-hostip-6372350a-b7a0-4ffe-acad-0cb8df0cd2e0 has hostIP: 172.17.0.10 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:32.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1818" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3769,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:32.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 8 22:09:32.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5900' Apr 8 22:09:32.371: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 8 22:09:32.371: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 Apr 8 22:09:32.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5900' Apr 8 22:09:32.490: INFO: stderr: "" Apr 8 22:09:32.490: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:32.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5900" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":225,"skipped":3774,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:32.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read 
extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 22:09:32.924: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 22:09:34.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980572, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980572, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980572, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980572, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 22:09:38.001: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:38.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-3225" for this suite. STEP: Destroying namespace "webhook-3225-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.144 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":226,"skipped":3801,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:38.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:09:38.752: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 8 22:09:41.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-7608 create -f -' Apr 8 22:09:45.129: INFO: stderr: "" Apr 8 22:09:45.129: INFO: stdout: "e2e-test-crd-publish-openapi-3611-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 8 22:09:45.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7608 delete e2e-test-crd-publish-openapi-3611-crds test-foo' Apr 8 22:09:45.250: INFO: stderr: "" Apr 8 22:09:45.250: INFO: stdout: "e2e-test-crd-publish-openapi-3611-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 8 22:09:45.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7608 apply -f -' Apr 8 22:09:45.514: INFO: stderr: "" Apr 8 22:09:45.514: INFO: stdout: "e2e-test-crd-publish-openapi-3611-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 8 22:09:45.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7608 delete e2e-test-crd-publish-openapi-3611-crds test-foo' Apr 8 22:09:45.634: INFO: stderr: "" Apr 8 22:09:45.635: INFO: stdout: "e2e-test-crd-publish-openapi-3611-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 8 22:09:45.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7608 create -f -' Apr 8 22:09:45.876: INFO: rc: 1 Apr 8 22:09:45.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7608 apply -f -' Apr 8 22:09:46.158: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 8 22:09:46.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7608 create -f -' Apr 8 22:09:46.381: INFO: rc: 1 Apr 8 
22:09:46.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7608 apply -f -' Apr 8 22:09:46.629: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 8 22:09:46.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3611-crds' Apr 8 22:09:46.882: INFO: stderr: "" Apr 8 22:09:46.882: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3611-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 8 22:09:46.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3611-crds.metadata' Apr 8 22:09:47.107: INFO: stderr: "" Apr 8 22:09:47.107: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3611-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 8 22:09:47.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3611-crds.spec' Apr 8 22:09:47.375: INFO: stderr: "" Apr 8 22:09:47.375: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3611-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 8 22:09:47.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3611-crds.spec.bars' Apr 8 22:09:47.620: INFO: stderr: "" Apr 8 22:09:47.620: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3611-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 8 22:09:47.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3611-crds.spec.bars2' Apr 8 22:09:47.861: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:50.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7608" for this suite. 
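The `kubectl explain` calls above can be reproduced against any cluster whose CRDs publish a structural OpenAPI v3 schema, which is exactly what this conformance test verifies. A minimal sketch, assuming a hypothetical CRD named `foos.example.com` standing in for the generated `e2e-test-crd-publish-openapi-3611-crds` resource:

```shell
# Top-level schema for the custom resource kind
kubectl explain foos.example.com

# Drill into a nested property; kubectl resolves the dotted path
# against the published OpenAPI schema
kubectl explain foos.example.com.spec.bars

# A path that does not exist in the schema exits non-zero
# (the "rc: 1" seen in the log above)
kubectl explain foos.example.com.spec.bars2 || echo "field not found, as expected"
```

If the CRD declares no validation schema, `explain` falls back to an empty description rather than the field-by-field output shown in the log.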
• [SLOW TEST:12.102 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":227,"skipped":3838,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:50.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:09:50.820: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 27.034279ms) Apr 8 22:09:50.824: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 4.074041ms) Apr 8 22:09:50.828: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.883984ms) Apr 8 22:09:50.831: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.057467ms) Apr 8 22:09:50.834: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.098307ms) Apr 8 22:09:50.838: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.146727ms) Apr 8 22:09:50.841: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.305856ms) Apr 8 22:09:50.844: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.251151ms) Apr 8 22:09:50.848: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.470761ms) Apr 8 22:09:50.851: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.515409ms) Apr 8 22:09:50.855: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.390481ms) Apr 8 22:09:50.858: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.404825ms) Apr 8 22:09:50.861: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.182627ms) Apr 8 22:09:50.865: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.215427ms) Apr 8 22:09:50.868: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.758383ms) Apr 8 22:09:50.872: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.721196ms) Apr 8 22:09:50.876: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.671261ms) Apr 8 22:09:50.880: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.909598ms) Apr 8 22:09:50.884: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.894824ms) Apr 8 22:09:50.888: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/
pods/
(200; 3.786533ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:50.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1499" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":228,"skipped":3858,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:50.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-bada6375-c368-48be-986a-ead797966974 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:09:50.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6541" for this suite. 
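The twenty proxy-subresource GETs logged earlier in this run (the `(200; …ms)` lines) hit the kubelet's log directory through the API server. The same endpoint can be queried directly; the node name `jerma-worker2` is taken from the log, and any schedulable node name works:

```shell
# Fetch the kubelet's log directory listing via the node proxy subresource;
# the response body is the "containers/ pods/" listing seen in the log
kubectl get --raw "/api/v1/nodes/jerma-worker2/proxy/logs/"

# Long-hand equivalent: run a local API proxy and issue a plain HTTP GET
kubectl proxy --port=8001 &
curl -s http://127.0.0.1:8001/api/v1/nodes/jerma-worker2/proxy/logs/
kill %1
```

Access to this subresource requires `nodes/proxy` permission, which the e2e suite runs with.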
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":229,"skipped":3871,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:09:50.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 8 22:09:51.033: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:51.075: INFO: Number of nodes with available pods: 0 Apr 8 22:09:51.075: INFO: Node jerma-worker is running more than one daemon pod Apr 8 22:09:52.078: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:52.081: INFO: Number of nodes with available pods: 0 Apr 8 22:09:52.081: INFO: Node jerma-worker is running more than one daemon pod Apr 8 22:09:53.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:53.081: INFO: Number of nodes with available pods: 0 Apr 8 22:09:53.081: INFO: Node jerma-worker is running more than one daemon pod Apr 8 22:09:54.082: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:54.085: INFO: Number of nodes with available pods: 0 Apr 8 22:09:54.085: INFO: Node jerma-worker is running more than one daemon pod Apr 8 22:09:55.094: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:55.117: INFO: Number of nodes with available pods: 2 Apr 8 22:09:55.117: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 8 22:09:55.134: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:55.137: INFO: Number of nodes with available pods: 1 Apr 8 22:09:55.137: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:09:56.169: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:56.193: INFO: Number of nodes with available pods: 1 Apr 8 22:09:56.193: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:09:57.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:57.145: INFO: Number of nodes with available pods: 1 Apr 8 22:09:57.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:09:58.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:58.146: INFO: Number of nodes with available pods: 1 Apr 8 22:09:58.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:09:59.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:09:59.145: INFO: Number of nodes with available pods: 1 Apr 8 22:09:59.145: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:00.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:00.146: INFO: Number of nodes with available pods: 1 Apr 8 22:10:00.146: INFO: Node jerma-worker2 
is running more than one daemon pod Apr 8 22:10:01.140: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:01.143: INFO: Number of nodes with available pods: 1 Apr 8 22:10:01.143: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:02.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:02.149: INFO: Number of nodes with available pods: 1 Apr 8 22:10:02.149: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:03.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:03.146: INFO: Number of nodes with available pods: 1 Apr 8 22:10:03.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:04.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:04.146: INFO: Number of nodes with available pods: 1 Apr 8 22:10:04.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:05.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:05.146: INFO: Number of nodes with available pods: 1 Apr 8 22:10:05.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:06.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:06.146: INFO: Number of nodes with available pods: 1 Apr 8 
22:10:06.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:07.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:07.146: INFO: Number of nodes with available pods: 1 Apr 8 22:10:07.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:08.143: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:08.146: INFO: Number of nodes with available pods: 1 Apr 8 22:10:08.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:09.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:09.146: INFO: Number of nodes with available pods: 1 Apr 8 22:10:09.146: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:10.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:10.145: INFO: Number of nodes with available pods: 1 Apr 8 22:10:10.145: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:11.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:11.143: INFO: Number of nodes with available pods: 1 Apr 8 22:10:11.143: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:12.141: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:12.144: INFO: Number of 
nodes with available pods: 1 Apr 8 22:10:12.144: INFO: Node jerma-worker2 is running more than one daemon pod Apr 8 22:10:13.142: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 8 22:10:13.146: INFO: Number of nodes with available pods: 2 Apr 8 22:10:13.146: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2462, will wait for the garbage collector to delete the pods Apr 8 22:10:13.209: INFO: Deleting DaemonSet.extensions daemon-set took: 7.478256ms Apr 8 22:10:13.509: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.262307ms Apr 8 22:10:19.513: INFO: Number of nodes with available pods: 0 Apr 8 22:10:19.513: INFO: Number of running nodes: 0, number of available pods: 0 Apr 8 22:10:19.515: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2462/daemonsets","resourceVersion":"6523562"},"items":null} Apr 8 22:10:19.517: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2462/pods","resourceVersion":"6523562"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:10:19.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2462" for this suite. 
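The DaemonSet lifecycle this test drives (create, wait for per-node availability, delete one pod, watch it be revived, tear down) can be sketched with a minimal manifest. The name `daemon-set` matches the log, but the image and labels are illustrative, not the exact spec the e2e framework uses:

```shell
# Create a simple DaemonSet (one pod per schedulable node)
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9
EOF

# Wait until desired == ready, the condition the
# "Number of running nodes: 2, number of available pods: 2" loop polls for
kubectl rollout status daemonset/daemon-set

# Delete one daemon pod; the controller revives it on the same node
kubectl delete pod -l app=daemon-set --field-selector spec.nodeName=jerma-worker2

# Teardown; the pods are garbage-collected with the set
kubectl delete daemonset daemon-set
```

Control-plane nodes carrying the `node-role.kubernetes.io/master:NoSchedule` taint are skipped unless the pod template adds a matching toleration, which is what the repeated "can't tolerate node jerma-control-plane" messages above report.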
• [SLOW TEST:28.569 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":230,"skipped":3879,"failed":0} S ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:10:19.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:10:19.629: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c7b9f285-46f5-45ff-9d00-a55f6fb17a11" in namespace "security-context-test-8047" to be "success or failure" Apr 8 22:10:19.641: INFO: Pod "alpine-nnp-false-c7b9f285-46f5-45ff-9d00-a55f6fb17a11": Phase="Pending", Reason="", readiness=false. Elapsed: 11.651903ms Apr 8 22:10:21.645: INFO: Pod "alpine-nnp-false-c7b9f285-46f5-45ff-9d00-a55f6fb17a11": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.015688344s Apr 8 22:10:23.650: INFO: Pod "alpine-nnp-false-c7b9f285-46f5-45ff-9d00-a55f6fb17a11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020201515s Apr 8 22:10:23.650: INFO: Pod "alpine-nnp-false-c7b9f285-46f5-45ff-9d00-a55f6fb17a11" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:10:23.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8047" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3880,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:10:23.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Apr 8 22:10:23.730: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6458' Apr 8 22:10:24.047: INFO: stderr: "" Apr 8 22:10:24.047: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 8 22:10:24.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6458' Apr 8 22:10:24.225: INFO: stderr: "" Apr 8 22:10:24.225: INFO: stdout: "update-demo-nautilus-9bdm5 update-demo-nautilus-t8tl5 " Apr 8 22:10:24.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bdm5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 8 22:10:24.339: INFO: stderr: "" Apr 8 22:10:24.339: INFO: stdout: "" Apr 8 22:10:24.339: INFO: update-demo-nautilus-9bdm5 is created but not running Apr 8 22:10:29.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6458' Apr 8 22:10:29.434: INFO: stderr: "" Apr 8 22:10:29.434: INFO: stdout: "update-demo-nautilus-9bdm5 update-demo-nautilus-t8tl5 " Apr 8 22:10:29.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bdm5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 8 22:10:29.528: INFO: stderr: "" Apr 8 22:10:29.528: INFO: stdout: "true" Apr 8 22:10:29.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9bdm5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 8 22:10:29.623: INFO: stderr: "" Apr 8 22:10:29.623: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 8 22:10:29.623: INFO: validating pod update-demo-nautilus-9bdm5 Apr 8 22:10:29.626: INFO: got data: { "image": "nautilus.jpg" } Apr 8 22:10:29.626: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 8 22:10:29.626: INFO: update-demo-nautilus-9bdm5 is verified up and running Apr 8 22:10:29.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8tl5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 8 22:10:29.715: INFO: stderr: "" Apr 8 22:10:29.715: INFO: stdout: "true" Apr 8 22:10:29.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8tl5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6458' Apr 8 22:10:29.804: INFO: stderr: "" Apr 8 22:10:29.804: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 8 22:10:29.804: INFO: validating pod update-demo-nautilus-t8tl5 Apr 8 22:10:29.809: INFO: got data: { "image": "nautilus.jpg" } Apr 8 22:10:29.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 8 22:10:29.809: INFO: update-demo-nautilus-t8tl5 is verified up and running
STEP: rolling-update to new replication controller
Apr 8 22:10:29.811: INFO: scanned /root for discovery docs:
Apr 8 22:10:29.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6458'
Apr 8 22:10:52.300: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 8 22:10:52.300: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 8 22:10:52.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6458'
Apr 8 22:10:52.411: INFO: stderr: ""
Apr 8 22:10:52.411: INFO: stdout: "update-demo-kitten-9vtxl update-demo-kitten-blv6k "
Apr 8 22:10:52.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9vtxl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458'
Apr 8 22:10:52.497: INFO: stderr: ""
Apr 8 22:10:52.497: INFO: stdout: "true"
Apr 8 22:10:52.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9vtxl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6458'
Apr 8 22:10:52.586: INFO: stderr: ""
Apr 8 22:10:52.586: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 8 22:10:52.586: INFO: validating pod update-demo-kitten-9vtxl
Apr 8 22:10:52.591: INFO: got data: {
  "image": "kitten.jpg"
}
Apr 8 22:10:52.591: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 8 22:10:52.591: INFO: update-demo-kitten-9vtxl is verified up and running
Apr 8 22:10:52.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-blv6k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6458'
Apr 8 22:10:52.687: INFO: stderr: ""
Apr 8 22:10:52.687: INFO: stdout: "true"
Apr 8 22:10:52.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-blv6k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6458'
Apr 8 22:10:52.787: INFO: stderr: ""
Apr 8 22:10:52.787: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 8 22:10:52.787: INFO: validating pod update-demo-kitten-blv6k
Apr 8 22:10:52.791: INFO: got data: {
  "image": "kitten.jpg"
}
Apr 8 22:10:52.791: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 8 22:10:52.791: INFO: update-demo-kitten-blv6k is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:10:52.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6458" for this suite.
• [SLOW TEST:29.133 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should do a rolling update of a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":232,"skipped":3889,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:10:52.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:10:56.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6545" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3903,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:10:56.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 22:10:57.053: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f6a1161-ef89-4421-a455-cd0301f2b9c6" in namespace "projected-4662" to be "success or failure" Apr 8 22:10:57.056: INFO: Pod "downwardapi-volume-9f6a1161-ef89-4421-a455-cd0301f2b9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.814931ms Apr 8 22:10:59.286: INFO: Pod "downwardapi-volume-9f6a1161-ef89-4421-a455-cd0301f2b9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232723296s Apr 8 22:11:01.290: INFO: Pod "downwardapi-volume-9f6a1161-ef89-4421-a455-cd0301f2b9c6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.236986943s STEP: Saw pod success Apr 8 22:11:01.290: INFO: Pod "downwardapi-volume-9f6a1161-ef89-4421-a455-cd0301f2b9c6" satisfied condition "success or failure" Apr 8 22:11:01.293: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9f6a1161-ef89-4421-a455-cd0301f2b9c6 container client-container: STEP: delete the pod Apr 8 22:11:01.331: INFO: Waiting for pod downwardapi-volume-9f6a1161-ef89-4421-a455-cd0301f2b9c6 to disappear Apr 8 22:11:01.338: INFO: Pod downwardapi-volume-9f6a1161-ef89-4421-a455-cd0301f2b9c6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:11:01.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4662" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3907,"failed":0} SS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:11:01.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7327.svc.cluster.local)" && echo OK > 
/results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7327.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7327.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7327.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7327.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7327.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 8 22:11:07.435: INFO: DNS probes using dns-7327/dns-test-7d2cd52a-145a-4fe2-9583-af41bd16fb34 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:11:07.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7327" for this suite. 
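The probe script above derives each pod's DNS A-record name by rewriting the pod IP with awk: `hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7327.pod.cluster.local"}'`, then resolves it with `dig` over both UDP and TCP. As a side illustration (not part of the test), the same name derivation in Python; the namespace matches the test's `dns-7327`, but the sample pod IP is hypothetical:

```python
def pod_a_record(pod_ip, namespace):
    """Build the <a-b-c-d>.<namespace>.pod.cluster.local name that the
    e2e probe queries with dig, mirroring the awk rewrite in the log."""
    return pod_ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

print(pod_a_record("10.244.1.5", "dns-7327"))  # 10-244-1-5.dns-7327.pod.cluster.local
```

Writing an `OK` marker file per successful lookup is how the probe pod reports results back to the test, which then reads the `/results` files.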
• [SLOW TEST:6.142 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":235,"skipped":3909,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:11:07.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 8 22:11:07.833: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Apr 8 22:11:07.842: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:07.847: INFO: Number of nodes with available pods: 0
Apr 8 22:11:07.847: INFO: Node jerma-worker is running more than one daemon pod
Apr 8 22:11:08.852: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:08.856: INFO: Number of nodes with available pods: 0
Apr 8 22:11:08.856: INFO: Node jerma-worker is running more than one daemon pod
Apr 8 22:11:09.867: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:09.870: INFO: Number of nodes with available pods: 0
Apr 8 22:11:09.870: INFO: Node jerma-worker is running more than one daemon pod
Apr 8 22:11:10.852: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:10.855: INFO: Number of nodes with available pods: 1
Apr 8 22:11:10.856: INFO: Node jerma-worker2 is running more than one daemon pod
Apr 8 22:11:11.852: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:11.855: INFO: Number of nodes with available pods: 1
Apr 8 22:11:11.855: INFO: Node jerma-worker2 is running more than one daemon pod
Apr 8 22:11:12.874: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:12.877: INFO: Number of nodes with available pods: 2
Apr 8 22:11:12.877: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Apr 8 22:11:12.999: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:12.999: INFO: Wrong image for pod: daemon-set-t7fgh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:13.033: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:14.257: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:14.257: INFO: Wrong image for pod: daemon-set-t7fgh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:14.320: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:15.037: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:15.037: INFO: Wrong image for pod: daemon-set-t7fgh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:15.037: INFO: Pod daemon-set-t7fgh is not available
Apr 8 22:11:15.040: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:16.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:16.038: INFO: Pod daemon-set-t5657 is not available
Apr 8 22:11:16.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:17.066: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:17.066: INFO: Pod daemon-set-t5657 is not available
Apr 8 22:11:17.069: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:18.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:18.038: INFO: Pod daemon-set-t5657 is not available
Apr 8 22:11:18.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:19.155: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:19.185: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:20.089: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:20.089: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:20.093: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:21.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:21.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:21.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:22.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:22.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:22.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:23.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:23.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:23.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:24.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:24.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:24.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:25.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:25.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:25.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:26.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:26.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:26.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:27.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:27.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:27.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:28.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:28.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:28.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:29.038: INFO: Wrong image for pod: daemon-set-8cf99. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Apr 8 22:11:29.038: INFO: Pod daemon-set-8cf99 is not available
Apr 8 22:11:29.041: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:30.038: INFO: Pod daemon-set-nfrj7 is not available
Apr 8 22:11:30.042: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Apr 8 22:11:30.047: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:30.050: INFO: Number of nodes with available pods: 1
Apr 8 22:11:30.050: INFO: Node jerma-worker2 is running more than one daemon pod
Apr 8 22:11:31.055: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:31.059: INFO: Number of nodes with available pods: 1
Apr 8 22:11:31.059: INFO: Node jerma-worker2 is running more than one daemon pod
Apr 8 22:11:32.065: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 8 22:11:32.068: INFO: Number of nodes with available pods: 2
Apr 8 22:11:32.068: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4593, will wait for the garbage collector to delete the pods
Apr 8 22:11:32.140: INFO: Deleting DaemonSet.extensions daemon-set took: 6.465379ms
Apr 8 22:11:32.540: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.267528ms
Apr 8 22:11:39.344: INFO: Number of nodes with available pods: 0
Apr 8 22:11:39.344: INFO: Number of running nodes: 0, number of available pods: 0
Apr 8 22:11:39.347: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4593/daemonsets","resourceVersion":"6524142"},"items":null}
Apr 8 22:11:39.349: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4593/pods","resourceVersion":"6524142"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:11:39.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4593" for this suite.
• [SLOW TEST:31.876 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":236,"skipped":3940,"failed":0}
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:11:39.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Apr 8 22:11:39.449: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 8 22:11:39.457: INFO: Waiting for terminating namespaces to be deleted...
Apr 8 22:11:39.460: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Apr 8 22:11:39.464: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 8 22:11:39.464: INFO: Container kindnet-cni ready: true, restart count 0
Apr 8 22:11:39.464: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 8 22:11:39.464: INFO: Container kube-proxy ready: true, restart count 0
Apr 8 22:11:39.464: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Apr 8 22:11:39.469: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
Apr 8 22:11:39.469: INFO: Container kube-hunter ready: false, restart count 0
Apr 8 22:11:39.469: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 8 22:11:39.469: INFO: Container kindnet-cni ready: true, restart count 0
Apr 8 22:11:39.469: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
Apr 8 22:11:39.469: INFO: Container kube-bench ready: false, restart count 0
Apr 8 22:11:39.469: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 8 22:11:39.469: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-07726a3c-1a5f-4243-a798-150451df3269 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-07726a3c-1a5f-4243-a798-150451df3269 off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-07726a3c-1a5f-4243-a798-150451df3269
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:11:47.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1792" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:8.312 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":237,"skipped":3940,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:11:47.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0408 22:11:57.778926       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 8 22:11:57.778: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:11:57.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7962" for this suite.
• [SLOW TEST:10.109 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":238,"skipped":3952,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:11:57.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:12:14.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7636" for this suite. • [SLOW TEST:16.264 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":278,"completed":239,"skipped":3956,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:12:14.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 8 22:12:14.682: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 8 22:12:16.693: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980734, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980734, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980734, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980734, 
loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 8 22:12:19.725: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 8 22:12:23.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-4938 to-be-attached-pod -i -c=container1' Apr 8 22:12:23.899: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:12:23.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4938" for this suite. STEP: Destroying namespace "webhook-4938-markers" for this suite. 
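The admission webhook exercised in the entry above denies `kubectl attach` by intercepting the `pods/attach` subresource. A minimal sketch of such a registration follows; the service name and namespace are taken from the log, but the URL path, webhook name, and CA bundle are illustrative assumptions (the suite generates its own):

```yaml
# Sketch of a ValidatingWebhookConfiguration that rejects pod attach.
# Only the service name/namespace come from the log; everything else
# is an assumed placeholder.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod          # assumed name
webhooks:
- name: deny-attaching-pod.example.com   # assumed name
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]        # attach/exec arrive as CONNECT
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-4938
      name: e2e-test-webhook
      path: /pods/attach           # assumed path
    # caBundle: <base64 CA cert that signed the webhook's serving cert>
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

With this in place, the `kubectl attach` in the log exits non-zero (`rc: 1`) because the API server's CONNECT request is rejected by the webhook.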
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.945 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":240,"skipped":3963,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:12:23.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:12:24.047: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:12:25.096: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "custom-resource-definition-1649" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":241,"skipped":3985,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:12:25.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 8 22:12:25.169: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 8 22:12:36.624: INFO: >>> kubeConfig: /root/.kube/config Apr 8 22:12:38.529: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:12:49.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3956" for this suite. 
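The "multiple CRDs of same group but different versions" entry above covers both shapes: one CRD carrying several versions, and separate CRDs in one group. The single-CRD, multi-version case can be sketched like this (group, kind, and schemas are illustrative, not the ones the suite publishes):

```yaml
# Sketch of one CRD serving two versions of the same group/kind.
# Exactly one version may set storage: true.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com    # assumed group and plural
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
  - name: v1
    served: true
    storage: true                      # the persisted version
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```

Both served versions then show up in the aggregated OpenAPI document, which is what the test verifies.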
• [SLOW TEST:24.012 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":242,"skipped":3998,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:12:49.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:12:49.165: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:12:49.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-7528" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":243,"skipped":4018,"failed":0} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:12:49.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 8 22:12:54.384: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3637 pod-service-account-ca2cf962-3cfc-4fbf-9c0b-0d439ea33a29 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 8 22:12:54.641: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3637 pod-service-account-ca2cf962-3cfc-4fbf-9c0b-0d439ea33a29 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 8 22:12:54.875: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3637 pod-service-account-ca2cf962-3cfc-4fbf-9c0b-0d439ea33a29 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:12:55.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3637" for this suite. • [SLOW TEST:5.307 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":244,"skipped":4025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:12:55.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 8 22:12:55.161: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:13:01.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8748" for this suite. • [SLOW TEST:5.944 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":245,"skipped":4073,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:13:01.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 8 22:13:01.104: INFO: Waiting up to 5m0s for pod "pod-186b87b2-4611-4be4-ad7b-8aa7f6346339" in namespace "emptydir-4491" to be "success or failure" Apr 8 22:13:01.108: INFO: Pod "pod-186b87b2-4611-4be4-ad7b-8aa7f6346339": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.641719ms Apr 8 22:13:03.111: INFO: Pod "pod-186b87b2-4611-4be4-ad7b-8aa7f6346339": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007591973s Apr 8 22:13:05.116: INFO: Pod "pod-186b87b2-4611-4be4-ad7b-8aa7f6346339": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011693094s STEP: Saw pod success Apr 8 22:13:05.116: INFO: Pod "pod-186b87b2-4611-4be4-ad7b-8aa7f6346339" satisfied condition "success or failure" Apr 8 22:13:05.119: INFO: Trying to get logs from node jerma-worker pod pod-186b87b2-4611-4be4-ad7b-8aa7f6346339 container test-container: STEP: delete the pod Apr 8 22:13:05.157: INFO: Waiting for pod pod-186b87b2-4611-4be4-ad7b-8aa7f6346339 to disappear Apr 8 22:13:05.174: INFO: Pod pod-186b87b2-4611-4be4-ad7b-8aa7f6346339 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:13:05.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4491" for this suite. 
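The emptyDir entry above checks a tmpfs-backed volume used by a non-root container with 0666 file permissions. An approximation of the pod it creates is sketched below; the image, user IDs, and command are assumptions (the suite uses its own test image), and note that `emptyDir` itself has no mode field — the mode is applied to the file the container writes:

```yaml
# Sketch of the (non-root, 0666, tmpfs) emptyDir scenario.
# fsGroup makes the volume group-writable for the non-root user.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs          # assumed name
spec:
  securityContext:
    runAsUser: 1001                  # assumed non-root UID
    fsGroup: 1001
  containers:
  - name: test-container
    image: busybox                   # assumed image
    command: ["sh", "-c",
      "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # tmpfs-backed
  restartPolicy: Never
```

The test then reads the container's logs (as seen in the entry above) to confirm the expected mode and ownership.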
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4082,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:13:05.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-99c293a2-2dd0-45b1-a26a-ee88b9c5f2a4 STEP: Creating a pod to test consume secrets Apr 8 22:13:05.262: INFO: Waiting up to 5m0s for pod "pod-secrets-70b2b0cb-d841-48cf-aa69-f6934e2ddcc0" in namespace "secrets-9859" to be "success or failure" Apr 8 22:13:05.275: INFO: Pod "pod-secrets-70b2b0cb-d841-48cf-aa69-f6934e2ddcc0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.677782ms Apr 8 22:13:07.280: INFO: Pod "pod-secrets-70b2b0cb-d841-48cf-aa69-f6934e2ddcc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018203076s Apr 8 22:13:09.284: INFO: Pod "pod-secrets-70b2b0cb-d841-48cf-aa69-f6934e2ddcc0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022189418s STEP: Saw pod success Apr 8 22:13:09.284: INFO: Pod "pod-secrets-70b2b0cb-d841-48cf-aa69-f6934e2ddcc0" satisfied condition "success or failure" Apr 8 22:13:09.287: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-70b2b0cb-d841-48cf-aa69-f6934e2ddcc0 container secret-volume-test: STEP: delete the pod Apr 8 22:13:09.307: INFO: Waiting for pod pod-secrets-70b2b0cb-d841-48cf-aa69-f6934e2ddcc0 to disappear Apr 8 22:13:09.311: INFO: Pod pod-secrets-70b2b0cb-d841-48cf-aa69-f6934e2ddcc0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:13:09.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9859" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":4105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:13:09.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name 
secret-test-69d21ba3-7917-463c-9c51-3c06daf0df45 STEP: Creating a pod to test consume secrets Apr 8 22:13:09.412: INFO: Waiting up to 5m0s for pod "pod-secrets-662d5b22-e175-4fa7-8f54-1f6fb8f69572" in namespace "secrets-5547" to be "success or failure" Apr 8 22:13:09.419: INFO: Pod "pod-secrets-662d5b22-e175-4fa7-8f54-1f6fb8f69572": Phase="Pending", Reason="", readiness=false. Elapsed: 7.346398ms Apr 8 22:13:11.423: INFO: Pod "pod-secrets-662d5b22-e175-4fa7-8f54-1f6fb8f69572": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011279641s Apr 8 22:13:13.428: INFO: Pod "pod-secrets-662d5b22-e175-4fa7-8f54-1f6fb8f69572": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01574796s STEP: Saw pod success Apr 8 22:13:13.428: INFO: Pod "pod-secrets-662d5b22-e175-4fa7-8f54-1f6fb8f69572" satisfied condition "success or failure" Apr 8 22:13:13.431: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-662d5b22-e175-4fa7-8f54-1f6fb8f69572 container secret-volume-test: STEP: delete the pod Apr 8 22:13:13.463: INFO: Waiting for pod pod-secrets-662d5b22-e175-4fa7-8f54-1f6fb8f69572 to disappear Apr 8 22:13:13.473: INFO: Pod pod-secrets-662d5b22-e175-4fa7-8f54-1f6fb8f69572 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:13:13.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5547" for this suite. 
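The Secrets entry above mounts a secret volume as a non-root user with `defaultMode` and `fsGroup` set. A minimal sketch of that pod shape, with assumed names, image, and IDs:

```yaml
# Sketch of a secret volume consumed as non-root with defaultMode
# and fsGroup. Names, image, and UIDs are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # assumed name
spec:
  securityContext:
    runAsUser: 1000                  # assumed non-root UID
    fsGroup: 1000                    # files become group-readable
  containers:
  - name: secret-volume-test
    image: busybox                   # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/*"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example   # assumed secret name
      defaultMode: 0440                 # mode applied to projected files
  restartPolicy: Never
```

`defaultMode` sets the mode of every projected file, while `fsGroup` adjusts group ownership so the non-root container can still read them.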
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:13:13.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-ht5x STEP: Creating a pod to test atomic-volume-subpath Apr 8 22:13:13.577: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ht5x" in namespace "subpath-6109" to be "success or failure" Apr 8 22:13:13.581: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Pending", Reason="", readiness=false. Elapsed: 3.659827ms Apr 8 22:13:15.585: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007825855s Apr 8 22:13:17.590: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 4.012235201s Apr 8 22:13:19.593: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.015923521s Apr 8 22:13:21.598: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 8.020232537s Apr 8 22:13:23.602: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 10.024644806s Apr 8 22:13:25.606: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 12.028633484s Apr 8 22:13:27.609: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 14.032117194s Apr 8 22:13:29.614: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 16.036199573s Apr 8 22:13:31.618: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 18.040288947s Apr 8 22:13:33.622: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 20.044438287s Apr 8 22:13:35.626: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Running", Reason="", readiness=true. Elapsed: 22.048786375s Apr 8 22:13:37.630: INFO: Pod "pod-subpath-test-configmap-ht5x": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052217449s STEP: Saw pod success Apr 8 22:13:37.630: INFO: Pod "pod-subpath-test-configmap-ht5x" satisfied condition "success or failure" Apr 8 22:13:37.632: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-ht5x container test-container-subpath-configmap-ht5x: STEP: delete the pod Apr 8 22:13:37.658: INFO: Waiting for pod pod-subpath-test-configmap-ht5x to disappear Apr 8 22:13:37.671: INFO: Pod pod-subpath-test-configmap-ht5x no longer exists STEP: Deleting pod pod-subpath-test-configmap-ht5x Apr 8 22:13:37.671: INFO: Deleting pod "pod-subpath-test-configmap-ht5x" in namespace "subpath-6109" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:13:37.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6109" for this suite. • [SLOW TEST:24.201 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":249,"skipped":4203,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: 
Creating a kubernetes client Apr 8 22:13:37.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-05e82ed5-2a02-4199-bf1b-485bb086a76f STEP: Creating secret with name secret-projected-all-test-volume-2cb5474d-9a40-4dc0-93cb-86d0eb34c433 STEP: Creating a pod to test Check all projections for projected volume plugin Apr 8 22:13:37.775: INFO: Waiting up to 5m0s for pod "projected-volume-367c5ade-6c81-46f1-9e39-ba1a09385101" in namespace "projected-8488" to be "success or failure" Apr 8 22:13:37.779: INFO: Pod "projected-volume-367c5ade-6c81-46f1-9e39-ba1a09385101": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012922ms Apr 8 22:13:39.783: INFO: Pod "projected-volume-367c5ade-6c81-46f1-9e39-ba1a09385101": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008348614s Apr 8 22:13:41.788: INFO: Pod "projected-volume-367c5ade-6c81-46f1-9e39-ba1a09385101": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012649163s STEP: Saw pod success Apr 8 22:13:41.788: INFO: Pod "projected-volume-367c5ade-6c81-46f1-9e39-ba1a09385101" satisfied condition "success or failure" Apr 8 22:13:41.791: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-367c5ade-6c81-46f1-9e39-ba1a09385101 container projected-all-volume-test: STEP: delete the pod Apr 8 22:13:41.826: INFO: Waiting for pod projected-volume-367c5ade-6c81-46f1-9e39-ba1a09385101 to disappear Apr 8 22:13:41.839: INFO: Pod projected-volume-367c5ade-6c81-46f1-9e39-ba1a09385101 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:13:41.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8488" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":250,"skipped":4221,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:13:41.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:13:41.926: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-4a38d22a-323c-43d9-91b8-0ab8feb77c07" in namespace "security-context-test-2663" to be "success or failure" Apr 8 22:13:41.930: INFO: Pod "busybox-privileged-false-4a38d22a-323c-43d9-91b8-0ab8feb77c07": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03698ms Apr 8 22:13:43.954: INFO: Pod "busybox-privileged-false-4a38d22a-323c-43d9-91b8-0ab8feb77c07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027760595s Apr 8 22:13:45.958: INFO: Pod "busybox-privileged-false-4a38d22a-323c-43d9-91b8-0ab8feb77c07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032162147s Apr 8 22:13:45.958: INFO: Pod "busybox-privileged-false-4a38d22a-323c-43d9-91b8-0ab8feb77c07" satisfied condition "success or failure" Apr 8 22:13:45.965: INFO: Got logs for pod "busybox-privileged-false-4a38d22a-323c-43d9-91b8-0ab8feb77c07": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:13:45.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2663" for this suite. 
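The Security Context entry above runs a container with `privileged: false` and confirms that a privileged network operation fails ("ip: RTNETLINK answers: Operation not permitted" in the log). A sketch of that pod, with assumed name, image, and command:

```yaml
# Sketch of an unprivileged pod attempting a privileged operation.
# Modifying network links requires CAP_NET_ADMIN, which an
# unprivileged container lacks by default.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false     # assumed name
spec:
  containers:
  - name: busybox
    image: busybox                   # assumed image
    command: ["ip", "link", "add", "dummy0", "type", "dummy"]
    securityContext:
      privileged: false
  restartPolicy: Never
```

The test reads the container's logs and passes when it sees the "Operation not permitted" error rather than a successful link creation.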
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4223,"failed":0}
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:13:45.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-ccfcf60b-1f8d-46bd-8b2a-424818d68721
STEP: Creating a pod to test consume configMaps
Apr 8 22:13:46.069: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71931855-d243-4e0a-949f-18a311054a1a" in namespace "projected-1264" to be "success or failure"
Apr 8 22:13:46.072: INFO: Pod "pod-projected-configmaps-71931855-d243-4e0a-949f-18a311054a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.903276ms
Apr 8 22:13:48.079: INFO: Pod "pod-projected-configmaps-71931855-d243-4e0a-949f-18a311054a1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010529564s
Apr 8 22:13:50.083: INFO: Pod "pod-projected-configmaps-71931855-d243-4e0a-949f-18a311054a1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014813849s
STEP: Saw pod success
Apr 8 22:13:50.083: INFO: Pod "pod-projected-configmaps-71931855-d243-4e0a-949f-18a311054a1a" satisfied condition "success or failure"
Apr 8 22:13:50.087: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-71931855-d243-4e0a-949f-18a311054a1a container projected-configmap-volume-test:
STEP: delete the pod
Apr 8 22:13:50.104: INFO: Waiting for pod pod-projected-configmaps-71931855-d243-4e0a-949f-18a311054a1a to disappear
Apr 8 22:13:50.108: INFO: Pod pod-projected-configmaps-71931855-d243-4e0a-949f-18a311054a1a no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:13:50.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1264" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":4223,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:13:50.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
Apr 8 22:13:50.243: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:13:57.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5230" for this suite.
• [SLOW TEST:7.047 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":253,"skipped":4240,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:13:57.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 8 22:14:01.787: INFO: Successfully updated pod "pod-update-activedeadlineseconds-91fe171c-baed-4e0b-b647-f373336c9305"
Apr 8 22:14:01.787: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-91fe171c-baed-4e0b-b647-f373336c9305" in namespace "pods-6300" to be "terminated due to deadline exceeded"
Apr 8 22:14:01.791: INFO: Pod "pod-update-activedeadlineseconds-91fe171c-baed-4e0b-b647-f373336c9305": Phase="Running", Reason="", readiness=true. Elapsed: 4.482119ms
Apr 8 22:14:03.796: INFO: Pod "pod-update-activedeadlineseconds-91fe171c-baed-4e0b-b647-f373336c9305": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.008816506s
Apr 8 22:14:03.796: INFO: Pod "pod-update-activedeadlineseconds-91fe171c-baed-4e0b-b647-f373336c9305" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:14:03.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6300" for this suite.
• [SLOW TEST:6.662 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4271,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:14:03.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-5356be87-959f-4b65-811b-5ff45d369bd0
STEP: Creating a pod to test consume configMaps
Apr 8 22:14:03.907: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-021f31ae-aca3-4887-a4a4-21e2711fc26f" in namespace "projected-2106" to be "success or failure"
Apr 8 22:14:03.912: INFO: Pod "pod-projected-configmaps-021f31ae-aca3-4887-a4a4-21e2711fc26f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286184ms
Apr 8 22:14:05.915: INFO: Pod "pod-projected-configmaps-021f31ae-aca3-4887-a4a4-21e2711fc26f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007838392s
Apr 8 22:14:07.923: INFO: Pod "pod-projected-configmaps-021f31ae-aca3-4887-a4a4-21e2711fc26f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015346277s
STEP: Saw pod success
Apr 8 22:14:07.923: INFO: Pod "pod-projected-configmaps-021f31ae-aca3-4887-a4a4-21e2711fc26f" satisfied condition "success or failure"
Apr 8 22:14:07.935: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-021f31ae-aca3-4887-a4a4-21e2711fc26f container projected-configmap-volume-test:
STEP: delete the pod
Apr 8 22:14:07.949: INFO: Waiting for pod pod-projected-configmaps-021f31ae-aca3-4887-a4a4-21e2711fc26f to disappear
Apr 8 22:14:07.976: INFO: Pod pod-projected-configmaps-021f31ae-aca3-4887-a4a4-21e2711fc26f no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:14:07.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2106" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4273,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:14:07.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 8 22:14:12.129: INFO: &Pod{ObjectMeta:{send-events-d1cec44d-a538-4fde-8f4f-cf614273b5cf events-4437 /api/v1/namespaces/events-4437/pods/send-events-d1cec44d-a538-4fde-8f4f-cf614273b5cf 91a0cca2-b115-45fb-9a42-8b44fcb666cd 6525258 0 2020-04-08 22:14:08 +0000 UTC map[name:foo time:46950479] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zkvzh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zkvzh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zkvzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:14:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:14:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:14:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-08 22:14:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.86,StartTime:2020-04-08 22:14:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-08 22:14:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://b85bdd72111b7b00edc7c816bc4648c89c8d8e11a22f58813ee7a0ec3e5a3660,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Apr 8 22:14:14.133: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 8 22:14:16.150: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:14:16.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4437" for this suite.
• [SLOW TEST:8.222 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":256,"skipped":4292,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:14:16.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:15:16.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8962" for this suite.
• [SLOW TEST:60.115 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4317,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:15:16.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 8 22:15:24.459: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 8 22:15:24.464: INFO: Pod pod-with-poststart-http-hook still exists
Apr 8 22:15:26.464: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 8 22:15:26.468: INFO: Pod pod-with-poststart-http-hook still exists
Apr 8 22:15:28.464: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 8 22:15:28.468: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:15:28.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5531" for this suite.
• [SLOW TEST:12.153 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4320,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:15:28.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 8 22:15:28.550: INFO: Waiting up to 5m0s for pod "pod-cd900dee-a962-4ada-bff5-11183385d3a8" in namespace "emptydir-8732" to be "success or failure"
Apr 8 22:15:28.553: INFO: Pod "pod-cd900dee-a962-4ada-bff5-11183385d3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.493774ms
Apr 8 22:15:30.557: INFO: Pod "pod-cd900dee-a962-4ada-bff5-11183385d3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007407882s
Apr 8 22:15:32.562: INFO: Pod "pod-cd900dee-a962-4ada-bff5-11183385d3a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011625701s
STEP: Saw pod success
Apr 8 22:15:32.562: INFO: Pod "pod-cd900dee-a962-4ada-bff5-11183385d3a8" satisfied condition "success or failure"
Apr 8 22:15:32.565: INFO: Trying to get logs from node jerma-worker2 pod pod-cd900dee-a962-4ada-bff5-11183385d3a8 container test-container:
STEP: delete the pod
Apr 8 22:15:32.597: INFO: Waiting for pod pod-cd900dee-a962-4ada-bff5-11183385d3a8 to disappear
Apr 8 22:15:32.612: INFO: Pod pod-cd900dee-a962-4ada-bff5-11183385d3a8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:15:32.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8732" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4320,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:15:32.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 8 22:15:32.970: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 8 22:15:34.979: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980932, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980932, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980933, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721980932, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 22:15:38.013: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 8 22:15:38.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:15:39.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1322" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:6.714 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":260,"skipped":4331,"failed":0}
S
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:15:39.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-91ae2ed4-137a-4920-b08b-fcd49eae2fe7
STEP: Creating a pod to test consume secrets
Apr 8 22:15:39.388: INFO: Waiting up to 5m0s for pod "pod-secrets-6524a99e-62b3-42ef-89db-47c39fe632a5" in namespace "secrets-6855" to be "success or failure"
Apr 8 22:15:39.392: INFO: Pod "pod-secrets-6524a99e-62b3-42ef-89db-47c39fe632a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054654ms
Apr 8 22:15:41.442: INFO: Pod "pod-secrets-6524a99e-62b3-42ef-89db-47c39fe632a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053293626s
Apr 8 22:15:43.446: INFO: Pod "pod-secrets-6524a99e-62b3-42ef-89db-47c39fe632a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057704811s
STEP: Saw pod success
Apr 8 22:15:43.446: INFO: Pod "pod-secrets-6524a99e-62b3-42ef-89db-47c39fe632a5" satisfied condition "success or failure"
Apr 8 22:15:43.449: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-6524a99e-62b3-42ef-89db-47c39fe632a5 container secret-env-test:
STEP: delete the pod
Apr 8 22:15:43.480: INFO: Waiting for pod pod-secrets-6524a99e-62b3-42ef-89db-47c39fe632a5 to disappear
Apr 8 22:15:43.507: INFO: Pod pod-secrets-6524a99e-62b3-42ef-89db-47c39fe632a5 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:15:43.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6855" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4332,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:15:43.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 22:15:43.635: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dde1e5c5-ba9e-4fc0-9bc1-7e55f4a33efc" in namespace "projected-180" to be "success or failure"
Apr 8 22:15:43.645: INFO: Pod "downwardapi-volume-dde1e5c5-ba9e-4fc0-9bc1-7e55f4a33efc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.276348ms
Apr 8 22:15:45.650: INFO: Pod "downwardapi-volume-dde1e5c5-ba9e-4fc0-9bc1-7e55f4a33efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01464256s
Apr 8 22:15:47.660: INFO: Pod "downwardapi-volume-dde1e5c5-ba9e-4fc0-9bc1-7e55f4a33efc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02546118s
STEP: Saw pod success
Apr 8 22:15:47.661: INFO: Pod "downwardapi-volume-dde1e5c5-ba9e-4fc0-9bc1-7e55f4a33efc" satisfied condition "success or failure"
Apr 8 22:15:47.663: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dde1e5c5-ba9e-4fc0-9bc1-7e55f4a33efc container client-container:
STEP: delete the pod
Apr 8 22:15:47.717: INFO: Waiting for pod downwardapi-volume-dde1e5c5-ba9e-4fc0-9bc1-7e55f4a33efc to disappear
Apr 8 22:15:47.723: INFO: Pod downwardapi-volume-dde1e5c5-ba9e-4fc0-9bc1-7e55f4a33efc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:15:47.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-180" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4340,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:15:47.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 8 22:15:55.902: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 8 22:15:55.908: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 8 22:15:57.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 8 22:15:57.912: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 8 22:15:59.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 8 22:15:59.912: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 8 22:16:01.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 8 22:16:01.912: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 8 22:16:03.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 8 22:16:03.911: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 8 22:16:05.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 8 22:16:05.912: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 8 22:16:07.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 8 22:16:07.912: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 8 22:16:09.908: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 8 22:16:09.911: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:16:09.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3751" for this suite.
• [SLOW TEST:22.192 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4351,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:16:09.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Apr 8 22:16:10.013: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:16:10.093: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "kubectl-9993" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":264,"skipped":4362,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:16:10.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 8 22:16:10.244: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"6296dc55-3427-4fbc-9510-dbe8ffdc8baf", Controller:(*bool)(0xc00372cea2), BlockOwnerDeletion:(*bool)(0xc00372cea3)}} Apr 8 22:16:10.262: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"33363867-406c-41a2-ad1a-403f2a19fce0", Controller:(*bool)(0xc003c939e2), BlockOwnerDeletion:(*bool)(0xc003c939e3)}} Apr 8 22:16:10.298: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"446db502-1a3b-4014-a805-3904c16a229e", Controller:(*bool)(0xc003dbc9fa), BlockOwnerDeletion:(*bool)(0xc003dbc9fb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:16:15.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "gc-6302" for this suite. • [SLOW TEST:5.352 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":265,"skipped":4362,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:16:15.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-cf870236-56bf-4cbf-98e2-23f0f62242cb Apr 8 22:16:15.532: INFO: Pod name my-hostname-basic-cf870236-56bf-4cbf-98e2-23f0f62242cb: Found 0 pods out of 1 Apr 8 22:16:20.589: INFO: Pod name my-hostname-basic-cf870236-56bf-4cbf-98e2-23f0f62242cb: Found 1 pods out of 1 Apr 8 22:16:20.589: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-cf870236-56bf-4cbf-98e2-23f0f62242cb" are running Apr 8 22:16:20.597: INFO: Pod "my-hostname-basic-cf870236-56bf-4cbf-98e2-23f0f62242cb-hrjkr" is running (conditions: [{Type:Initialized Status:True 
LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 22:16:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 22:16:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 22:16:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-08 22:16:15 +0000 UTC Reason: Message:}]) Apr 8 22:16:20.597: INFO: Trying to dial the pod Apr 8 22:16:25.608: INFO: Controller my-hostname-basic-cf870236-56bf-4cbf-98e2-23f0f62242cb: Got expected result from replica 1 [my-hostname-basic-cf870236-56bf-4cbf-98e2-23f0f62242cb-hrjkr]: "my-hostname-basic-cf870236-56bf-4cbf-98e2-23f0f62242cb-hrjkr", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:16:25.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2451" for this suite. 
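The ReplicationController test above dials each replica through the API proxy and expects the reply to equal the pod's own name ("1 of 1 required successes so far"). A toy sketch of that verification step, with the replica-name-to-response data simulated locally (the real test issues HTTP requests to each pod):

```go
package main

import "fmt"

// verifyReplicas counts replicas whose reply matches their own pod name,
// mirroring the "Got expected result from replica" check in the log above.
// The data is simulated; pod names here are hypothetical.
func verifyReplicas(responses map[string]string) (successes int, allOK bool) {
	for pod, reply := range responses {
		if reply == pod {
			successes++
		}
	}
	return successes, successes == len(responses)
}

func main() {
	responses := map[string]string{
		"my-hostname-basic-replica-1": "my-hostname-basic-replica-1",
	}
	n, ok := verifyReplicas(responses)
	fmt.Println(n, ok) // 1 true
}
```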
• [SLOW TEST:10.156 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":266,"skipped":4369,"failed":0} SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:16:25.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-ced36173-e170-4ea0-bc73-ed55d689cc19 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ced36173-e170-4ea0-bc73-ed55d689cc19 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:17:38.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3342" for this suite. 
• [SLOW TEST:72.487 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4371,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:17:38.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2917 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 8 22:17:38.136: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 8 22:18:02.337: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.95:8080/dial?request=hostname&protocol=udp&host=10.244.1.21&port=8081&tries=1'] Namespace:pod-network-test-2917 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 22:18:02.337: INFO: >>> 
kubeConfig: /root/.kube/config I0408 22:18:02.375045 6 log.go:172] (0xc002779290) (0xc0007f2aa0) Create stream I0408 22:18:02.375079 6 log.go:172] (0xc002779290) (0xc0007f2aa0) Stream added, broadcasting: 1 I0408 22:18:02.376644 6 log.go:172] (0xc002779290) Reply frame received for 1 I0408 22:18:02.376691 6 log.go:172] (0xc002779290) (0xc0015f25a0) Create stream I0408 22:18:02.376712 6 log.go:172] (0xc002779290) (0xc0015f25a0) Stream added, broadcasting: 3 I0408 22:18:02.377885 6 log.go:172] (0xc002779290) Reply frame received for 3 I0408 22:18:02.377944 6 log.go:172] (0xc002779290) (0xc0007f2f00) Create stream I0408 22:18:02.377968 6 log.go:172] (0xc002779290) (0xc0007f2f00) Stream added, broadcasting: 5 I0408 22:18:02.378863 6 log.go:172] (0xc002779290) Reply frame received for 5 I0408 22:18:02.475191 6 log.go:172] (0xc002779290) Data frame received for 3 I0408 22:18:02.475213 6 log.go:172] (0xc0015f25a0) (3) Data frame handling I0408 22:18:02.475227 6 log.go:172] (0xc0015f25a0) (3) Data frame sent I0408 22:18:02.475851 6 log.go:172] (0xc002779290) Data frame received for 3 I0408 22:18:02.475887 6 log.go:172] (0xc0015f25a0) (3) Data frame handling I0408 22:18:02.476048 6 log.go:172] (0xc002779290) Data frame received for 5 I0408 22:18:02.476069 6 log.go:172] (0xc0007f2f00) (5) Data frame handling I0408 22:18:02.477978 6 log.go:172] (0xc002779290) Data frame received for 1 I0408 22:18:02.478030 6 log.go:172] (0xc0007f2aa0) (1) Data frame handling I0408 22:18:02.478072 6 log.go:172] (0xc0007f2aa0) (1) Data frame sent I0408 22:18:02.478100 6 log.go:172] (0xc002779290) (0xc0007f2aa0) Stream removed, broadcasting: 1 I0408 22:18:02.478145 6 log.go:172] (0xc002779290) Go away received I0408 22:18:02.478180 6 log.go:172] (0xc002779290) (0xc0007f2aa0) Stream removed, broadcasting: 1 I0408 22:18:02.478195 6 log.go:172] (0xc002779290) (0xc0015f25a0) Stream removed, broadcasting: 3 I0408 22:18:02.478201 6 log.go:172] (0xc002779290) (0xc0007f2f00) Stream removed, 
broadcasting: 5 Apr 8 22:18:02.478: INFO: Waiting for responses: map[] Apr 8 22:18:02.481: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.95:8080/dial?request=hostname&protocol=udp&host=10.244.2.94&port=8081&tries=1'] Namespace:pod-network-test-2917 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 8 22:18:02.482: INFO: >>> kubeConfig: /root/.kube/config I0408 22:18:02.512873 6 log.go:172] (0xc001a6edc0) (0xc0001010e0) Create stream I0408 22:18:02.512912 6 log.go:172] (0xc001a6edc0) (0xc0001010e0) Stream added, broadcasting: 1 I0408 22:18:02.515139 6 log.go:172] (0xc001a6edc0) Reply frame received for 1 I0408 22:18:02.515198 6 log.go:172] (0xc001a6edc0) (0xc0007f3b80) Create stream I0408 22:18:02.515216 6 log.go:172] (0xc001a6edc0) (0xc0007f3b80) Stream added, broadcasting: 3 I0408 22:18:02.516264 6 log.go:172] (0xc001a6edc0) Reply frame received for 3 I0408 22:18:02.516307 6 log.go:172] (0xc001a6edc0) (0xc000b15220) Create stream I0408 22:18:02.516322 6 log.go:172] (0xc001a6edc0) (0xc000b15220) Stream added, broadcasting: 5 I0408 22:18:02.517379 6 log.go:172] (0xc001a6edc0) Reply frame received for 5 I0408 22:18:02.570995 6 log.go:172] (0xc001a6edc0) Data frame received for 3 I0408 22:18:02.571025 6 log.go:172] (0xc0007f3b80) (3) Data frame handling I0408 22:18:02.571044 6 log.go:172] (0xc0007f3b80) (3) Data frame sent I0408 22:18:02.571420 6 log.go:172] (0xc001a6edc0) Data frame received for 3 I0408 22:18:02.571452 6 log.go:172] (0xc001a6edc0) Data frame received for 5 I0408 22:18:02.571490 6 log.go:172] (0xc000b15220) (5) Data frame handling I0408 22:18:02.571515 6 log.go:172] (0xc0007f3b80) (3) Data frame handling I0408 22:18:02.573360 6 log.go:172] (0xc001a6edc0) Data frame received for 1 I0408 22:18:02.573380 6 log.go:172] (0xc0001010e0) (1) Data frame handling I0408 22:18:02.573394 6 log.go:172] (0xc0001010e0) (1) Data frame sent I0408 22:18:02.573417 6 
log.go:172] (0xc001a6edc0) (0xc0001010e0) Stream removed, broadcasting: 1 I0408 22:18:02.573497 6 log.go:172] (0xc001a6edc0) (0xc0001010e0) Stream removed, broadcasting: 1 I0408 22:18:02.573512 6 log.go:172] (0xc001a6edc0) (0xc0007f3b80) Stream removed, broadcasting: 3 I0408 22:18:02.573654 6 log.go:172] (0xc001a6edc0) Go away received I0408 22:18:02.573759 6 log.go:172] (0xc001a6edc0) (0xc000b15220) Stream removed, broadcasting: 5 Apr 8 22:18:02.573: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:18:02.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2917" for this suite. • [SLOW TEST:24.479 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4385,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:18:02.582: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 8 22:18:02.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a6fd8a9-feb9-48f1-8fd2-f277373d64e0" in namespace "downward-api-2512" to be "success or failure" Apr 8 22:18:02.640: INFO: Pod "downwardapi-volume-7a6fd8a9-feb9-48f1-8fd2-f277373d64e0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.962749ms Apr 8 22:18:04.644: INFO: Pod "downwardapi-volume-7a6fd8a9-feb9-48f1-8fd2-f277373d64e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008045765s Apr 8 22:18:06.649: INFO: Pod "downwardapi-volume-7a6fd8a9-feb9-48f1-8fd2-f277373d64e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012542328s STEP: Saw pod success Apr 8 22:18:06.649: INFO: Pod "downwardapi-volume-7a6fd8a9-feb9-48f1-8fd2-f277373d64e0" satisfied condition "success or failure" Apr 8 22:18:06.652: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-7a6fd8a9-feb9-48f1-8fd2-f277373d64e0 container client-container: STEP: delete the pod Apr 8 22:18:06.724: INFO: Waiting for pod downwardapi-volume-7a6fd8a9-feb9-48f1-8fd2-f277373d64e0 to disappear Apr 8 22:18:06.736: INFO: Pod downwardapi-volume-7a6fd8a9-feb9-48f1-8fd2-f277373d64e0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:18:06.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2512" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4393,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:18:06.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 
STEP: Creating pod pod-subpath-test-secret-zbs8 STEP: Creating a pod to test atomic-volume-subpath Apr 8 22:18:06.816: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zbs8" in namespace "subpath-8850" to be "success or failure" Apr 8 22:18:06.857: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Pending", Reason="", readiness=false. Elapsed: 41.056984ms Apr 8 22:18:08.861: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044371276s Apr 8 22:18:10.864: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 4.047874858s Apr 8 22:18:12.869: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 6.052400483s Apr 8 22:18:14.873: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 8.057238634s Apr 8 22:18:16.878: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 10.061417348s Apr 8 22:18:18.882: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 12.065963376s Apr 8 22:18:20.886: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 14.070220062s Apr 8 22:18:22.893: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 16.076365311s Apr 8 22:18:24.896: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 18.080161061s Apr 8 22:18:26.900: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 20.084047632s Apr 8 22:18:28.904: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Running", Reason="", readiness=true. Elapsed: 22.087432101s Apr 8 22:18:30.906: INFO: Pod "pod-subpath-test-secret-zbs8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.090011073s STEP: Saw pod success Apr 8 22:18:30.906: INFO: Pod "pod-subpath-test-secret-zbs8" satisfied condition "success or failure" Apr 8 22:18:30.908: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-zbs8 container test-container-subpath-secret-zbs8: STEP: delete the pod Apr 8 22:18:30.963: INFO: Waiting for pod pod-subpath-test-secret-zbs8 to disappear Apr 8 22:18:30.982: INFO: Pod pod-subpath-test-secret-zbs8 no longer exists STEP: Deleting pod pod-subpath-test-secret-zbs8 Apr 8 22:18:30.982: INFO: Deleting pod "pod-subpath-test-secret-zbs8" in namespace "subpath-8850" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:18:30.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8850" for this suite. • [SLOW TEST:24.248 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":270,"skipped":4393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:18:30.991: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 8 22:18:31.060: INFO: Waiting up to 5m0s for pod "downward-api-80dd4b8d-0c86-4a0f-affd-4f62d1e77907" in namespace "downward-api-551" to be "success or failure" Apr 8 22:18:31.274: INFO: Pod "downward-api-80dd4b8d-0c86-4a0f-affd-4f62d1e77907": Phase="Pending", Reason="", readiness=false. Elapsed: 213.988896ms Apr 8 22:18:33.285: INFO: Pod "downward-api-80dd4b8d-0c86-4a0f-affd-4f62d1e77907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225402055s Apr 8 22:18:35.289: INFO: Pod "downward-api-80dd4b8d-0c86-4a0f-affd-4f62d1e77907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.22914277s STEP: Saw pod success Apr 8 22:18:35.289: INFO: Pod "downward-api-80dd4b8d-0c86-4a0f-affd-4f62d1e77907" satisfied condition "success or failure" Apr 8 22:18:35.292: INFO: Trying to get logs from node jerma-worker2 pod downward-api-80dd4b8d-0c86-4a0f-affd-4f62d1e77907 container dapi-container: STEP: delete the pod Apr 8 22:18:35.342: INFO: Waiting for pod downward-api-80dd4b8d-0c86-4a0f-affd-4f62d1e77907 to disappear Apr 8 22:18:35.347: INFO: Pod downward-api-80dd4b8d-0c86-4a0f-affd-4f62d1e77907 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:18:35.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-551" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4423,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:18:35.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-b418fd10-c87b-4646-978a-14704538e9a8 STEP: Creating a pod to test consume secrets Apr 8 22:18:35.423: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f0b0de5-4144-45a9-8b7d-3bf188436b8c" in namespace "projected-6349" to be "success or failure" Apr 8 22:18:35.441: INFO: Pod "pod-projected-secrets-6f0b0de5-4144-45a9-8b7d-3bf188436b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.76067ms Apr 8 22:18:37.445: INFO: Pod "pod-projected-secrets-6f0b0de5-4144-45a9-8b7d-3bf188436b8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022081485s Apr 8 22:18:39.451: INFO: Pod "pod-projected-secrets-6f0b0de5-4144-45a9-8b7d-3bf188436b8c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027590524s STEP: Saw pod success Apr 8 22:18:39.451: INFO: Pod "pod-projected-secrets-6f0b0de5-4144-45a9-8b7d-3bf188436b8c" satisfied condition "success or failure" Apr 8 22:18:39.455: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-6f0b0de5-4144-45a9-8b7d-3bf188436b8c container projected-secret-volume-test: STEP: delete the pod Apr 8 22:18:39.485: INFO: Waiting for pod pod-projected-secrets-6f0b0de5-4144-45a9-8b7d-3bf188436b8c to disappear Apr 8 22:18:39.497: INFO: Pod pod-projected-secrets-6f0b0de5-4144-45a9-8b7d-3bf188436b8c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:18:39.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6349" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4461,"failed":0} ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:18:39.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-53a4e41e-b996-4129-858a-9573f2f49db3 in namespace container-probe-6479 Apr 8 22:18:43.633: INFO: Started pod busybox-53a4e41e-b996-4129-858a-9573f2f49db3 in namespace container-probe-6479 STEP: checking the pod's current state and verifying that restartCount is present Apr 8 22:18:43.636: INFO: Initial restart count of pod busybox-53a4e41e-b996-4129-858a-9573f2f49db3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 8 22:22:44.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6479" for this suite. • [SLOW TEST:244.902 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4461,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 8 22:22:44.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting 
for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 8 22:22:44.843: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 8 22:22:46.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721981364, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721981364, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721981364, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721981364, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 8 22:22:49.985: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:22:50.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6409" for this suite.
STEP: Destroying namespace "webhook-6409-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:5.719 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":274,"skipped":4492,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:22:50.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:22:55.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2955" for this suite.
• [SLOW TEST:5.202 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":275,"skipped":4536,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:22:55.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-7040
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7040 to expose endpoints map[]
Apr 8 22:22:55.452: INFO: Get endpoints failed (11.148226ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 8 22:22:56.456: INFO: successfully validated that service endpoint-test2 in namespace services-7040 exposes endpoints map[] (1.015086116s elapsed)
STEP: Creating pod pod1 in namespace services-7040
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7040 to expose endpoints map[pod1:[80]]
Apr 8 22:22:59.496: INFO: successfully validated that service endpoint-test2 in namespace services-7040 exposes endpoints map[pod1:[80]] (3.032160721s elapsed)
STEP: Creating pod pod2 in namespace services-7040
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7040 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 8 22:23:02.611: INFO: successfully validated that service endpoint-test2 in namespace services-7040 exposes endpoints map[pod1:[80] pod2:[80]] (3.110945141s elapsed)
STEP: Deleting pod pod1 in namespace services-7040
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7040 to expose endpoints map[pod2:[80]]
Apr 8 22:23:03.654: INFO: successfully validated that service endpoint-test2 in namespace services-7040 exposes endpoints map[pod2:[80]] (1.038666086s elapsed)
STEP: Deleting pod pod2 in namespace services-7040
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7040 to expose endpoints map[]
Apr 8 22:23:04.667: INFO: successfully validated that service endpoint-test2 in namespace services-7040 exposes endpoints map[] (1.008694052s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:23:04.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7040" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:9.411 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":276,"skipped":4553,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:23:04.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 8 22:23:08.903: INFO: Waiting up to 5m0s for pod "client-envvars-a9e8ae38-01a5-447b-924e-586229d6f884" in namespace "pods-433" to be "success or failure"
Apr 8 22:23:08.916: INFO: Pod "client-envvars-a9e8ae38-01a5-447b-924e-586229d6f884": Phase="Pending", Reason="", readiness=false. Elapsed: 13.357297ms
Apr 8 22:23:10.920: INFO: Pod "client-envvars-a9e8ae38-01a5-447b-924e-586229d6f884": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017310693s
Apr 8 22:23:12.924: INFO: Pod "client-envvars-a9e8ae38-01a5-447b-924e-586229d6f884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021484005s
STEP: Saw pod success
Apr 8 22:23:12.924: INFO: Pod "client-envvars-a9e8ae38-01a5-447b-924e-586229d6f884" satisfied condition "success or failure"
Apr 8 22:23:12.927: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-a9e8ae38-01a5-447b-924e-586229d6f884 container env3cont:
STEP: delete the pod
Apr 8 22:23:12.970: INFO: Waiting for pod client-envvars-a9e8ae38-01a5-447b-924e-586229d6f884 to disappear
Apr 8 22:23:12.980: INFO: Pod client-envvars-a9e8ae38-01a5-447b-924e-586229d6f884 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:23:12.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-433" for this suite.
• [SLOW TEST:8.246 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4560,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 8 22:23:12.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 8 22:23:13.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3f9f6dbe-4f1f-4686-ab2c-18ff61f312b3" in namespace "downward-api-1062" to be "success or failure"
Apr 8 22:23:13.075: INFO: Pod "downwardapi-volume-3f9f6dbe-4f1f-4686-ab2c-18ff61f312b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.45852ms
Apr 8 22:23:15.082: INFO: Pod "downwardapi-volume-3f9f6dbe-4f1f-4686-ab2c-18ff61f312b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010215035s
Apr 8 22:23:17.086: INFO: Pod "downwardapi-volume-3f9f6dbe-4f1f-4686-ab2c-18ff61f312b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014734285s
STEP: Saw pod success
Apr 8 22:23:17.087: INFO: Pod "downwardapi-volume-3f9f6dbe-4f1f-4686-ab2c-18ff61f312b3" satisfied condition "success or failure"
Apr 8 22:23:17.090: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3f9f6dbe-4f1f-4686-ab2c-18ff61f312b3 container client-container:
STEP: delete the pod
Apr 8 22:23:17.126: INFO: Waiting for pod downwardapi-volume-3f9f6dbe-4f1f-4686-ab2c-18ff61f312b3 to disappear
Apr 8 22:23:17.130: INFO: Pod downwardapi-volume-3f9f6dbe-4f1f-4686-ab2c-18ff61f312b3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 8 22:23:17.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1062" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4562,"failed":0}
SS
Apr 8 22:23:17.136: INFO: Running AfterSuite actions on all nodes
Apr 8 22:23:17.136: INFO: Running AfterSuite actions on node 1
Apr 8 22:23:17.136: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4586.066 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS