I0404 17:13:56.650972 7 test_context.go:427] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0404 17:13:56.651138 7 e2e.go:129] Starting e2e run "74e16cc9-23f1-4871-9151-dd88cbcd20ec" on Ginkgo node 1
{"msg":"Test Suite starting","total":281,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1586020435 - Will randomize all specs
Will run 281 of 4997 specs

Apr 4 17:13:56.703: INFO: >>> kubeConfig: /root/.kube/config
Apr 4 17:13:56.706: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 4 17:13:56.729: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 4 17:13:56.764: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 4 17:13:56.764: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 4 17:13:56.764: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 4 17:13:56.775: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 4 17:13:56.775: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 4 17:13:56.775: INFO: e2e test version: v1.19.0-alpha.1.325+47f5d2923f3f35
Apr 4 17:13:56.776: INFO: kube-apiserver version: v1.17.0
Apr 4 17:13:56.776: INFO: >>> kubeConfig: /root/.kube/config
Apr 4 17:13:56.781: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:13:56.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
Apr 4 17:13:56.839: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 4 17:14:31.104: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:14:31.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-880" for this suite.
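The spec above verifies that a termination message written by a non-root container to a custom terminationMessagePath is surfaced in the container status. A minimal sketch of that kind of pod manifest as a Python dict — the pod name, image tag, message path, and UID are illustrative assumptions, not values from the log:

```python
# Sketch of a pod like the one the termination-message spec creates.
# All names and values here are illustrative, not taken from the log.
termination_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "termination-message-demo"},  # hypothetical name
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "main",
            "image": "busybox:1.29",
            # Non-default path: the kubelet reads this file when the container
            # exits and copies its contents into
            # status.containerStatuses[].state.terminated.message.
            "terminationMessagePath": "/dev/termination-custom-log",
            "command": ["/bin/sh", "-c",
                        "echo -n DONE > /dev/termination-custom-log"],
            # Run as a non-root user, as the [It] name requires.
            "securityContext": {"runAsUser": 1000},
        }],
    },
}

ctr = termination_pod["spec"]["containers"][0]
print(ctr["terminationMessagePath"], ctr["securityContext"]["runAsUser"])
# → /dev/termination-custom-log 1000
```

The message the container writes ("DONE") matches what the log's `Expected: &{DONE}` assertion checks against the terminated state.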
• [SLOW TEST:34.973 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":281,"completed":1,"skipped":22,"failed":0}
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:14:31.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:14:38.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6132" for this suite.
STEP: Destroying namespace "nsdeletetest-6946" for this suite.
Apr 4 17:14:38.060: INFO: Namespace nsdeletetest-6946 was already deleted
STEP: Destroying namespace "nsdeletetest-5143" for this suite.
• [SLOW TEST:6.309 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":281,"completed":2,"skipped":25,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:14:38.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:15:24.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3430" for this suite.
• [SLOW TEST:46.126 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":3,"skipped":32,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:15:24.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 4 17:15:24.284: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 4 17:15:24.313: INFO: Waiting for terminating namespaces to be deleted...
Apr 4 17:15:24.314: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 4 17:15:24.343: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 17:15:24.343: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 17:15:24.343: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 17:15:24.343: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 17:15:24.343: INFO: busybox-readonly-fs607a1f01-016c-4698-a693-40593a3324bb from kubelet-test-3430 started at 2020-04-04 17:14:38 +0000 UTC (1 container statuses recorded)
Apr 4 17:15:24.343: INFO: Container busybox-readonly-fs607a1f01-016c-4698-a693-40593a3324bb ready: true, restart count 0
Apr 4 17:15:24.343: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 4 17:15:24.356: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 17:15:24.356: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 17:15:24.356: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 17:15:24.356: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6886b1d0-6c2a-4fbe-9e34-ee22b98efc9a 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-6886b1d0-6c2a-4fbe-9e34-ee22b98efc9a off the node latest-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6886b1d0-6c2a-4fbe-9e34-ee22b98efc9a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:21:08.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7234" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:344.767 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":281,"completed":4,"skipped":33,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:21:08.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:21:26.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4659" for this suite.
• [SLOW TEST:17.964 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":281,"completed":5,"skipped":39,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:21:26.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:21:43.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5919" for this suite.
• [SLOW TEST:17.021 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":281,"completed":6,"skipped":57,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:21:43.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:21:44.030: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:21:45.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3464" for this suite.
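The CustomResourceDefinition spec above checks that defaults declared in a v1 CRD's structural schema are applied both to incoming requests and to objects decoded from storage. A toy model of that defaulting step — the schema fragment and field names are made up for illustration, and this is not the apiserver's actual implementation:

```python
# Toy model of OpenAPI v3 structural-schema defaulting, as performed by the
# apiserver for v1 CRDs. Schema and field names are illustrative.
crd_schema = {
    "type": "object",
    "properties": {
        "spec": {
            "type": "object",
            "properties": {
                "replicas": {"type": "integer", "default": 1},
                "paused": {"type": "boolean", "default": False},
            },
        },
    },
}

def apply_defaults(obj: dict, schema: dict) -> dict:
    """Recursively fill in schema defaults for fields missing from obj."""
    for name, sub in schema.get("properties", {}).items():
        if name not in obj and "default" in sub:
            obj[name] = sub["default"]
        if sub.get("type") == "object":
            obj.setdefault(name, {})
            apply_defaults(obj[name], sub)
    return obj

# Defaulting runs on create/update requests and again when reading from
# storage, which is why the spec exercises both paths.
obj = apply_defaults({"spec": {"replicas": 3}}, crd_schema)
print(obj)  # {'spec': {'replicas': 3, 'paused': False}}
```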
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":281,"completed":7,"skipped":91,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:21:45.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Apr 4 17:21:45.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7207'
Apr 4 17:21:47.909: INFO: stderr: ""
Apr 4 17:21:47.909: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 4 17:21:48.913: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:48.913: INFO: Found 0 / 1
Apr 4 17:21:49.912: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:49.912: INFO: Found 0 / 1
Apr 4 17:21:50.995: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:50.995: INFO: Found 0 / 1
Apr 4 17:21:51.912: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:51.912: INFO: Found 0 / 1
Apr 4 17:21:52.913: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:52.913: INFO: Found 0 / 1
Apr 4 17:21:53.913: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:53.913: INFO: Found 0 / 1
Apr 4 17:21:54.912: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:54.912: INFO: Found 0 / 1
Apr 4 17:21:55.914: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:55.914: INFO: Found 0 / 1
Apr 4 17:21:56.914: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:56.914: INFO: Found 0 / 1
Apr 4 17:21:58.011: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:58.011: INFO: Found 0 / 1
Apr 4 17:21:58.913: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:58.913: INFO: Found 0 / 1
Apr 4 17:21:59.922: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:59.922: INFO: Found 1 / 1
Apr 4 17:21:59.922: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 4 17:21:59.923: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:21:59.923: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 4 17:21:59.923: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config patch pod agnhost-master-72sgt --namespace=kubectl-7207 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 4 17:22:00.024: INFO: stderr: ""
Apr 4 17:22:00.024: INFO: stdout: "pod/agnhost-master-72sgt patched\n"
STEP: checking annotations
Apr 4 17:22:00.047: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 4 17:22:00.047: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:22:00.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7207" for this suite.
• [SLOW TEST:14.865 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1393
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":281,"completed":8,"skipped":95,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:22:00.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:22:31.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6803" for this suite.
STEP: Destroying namespace "nsdeletetest-9239" for this suite.
Apr 4 17:22:31.322: INFO: Namespace nsdeletetest-9239 was already deleted
STEP: Destroying namespace "nsdeletetest-8072" for this suite.
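Both Namespaces [Serial] specs in this run (services and pods) assert the same invariant: deleting a namespace removes every object scoped to it, and a namespace recreated with the same name starts empty. A toy in-memory model of that invariant — this is an illustration of the semantics, not the apiserver's garbage-collection logic:

```python
# Toy model of namespace-scoped object lifetime: deleting a namespace
# cascades to everything in it; recreating it starts from empty.
class Cluster:
    def __init__(self):
        self.namespaces = {}  # name -> {kind: set of object names}

    def create_namespace(self, name):
        self.namespaces[name] = {"Pod": set(), "Service": set()}

    def create(self, ns, kind, name):
        self.namespaces[ns][kind].add(name)

    def delete_namespace(self, name):
        # Cascading delete: contained objects disappear with the namespace.
        del self.namespaces[name]

    def list(self, ns, kind):
        return sorted(self.namespaces[ns][kind])

c = Cluster()
c.create_namespace("nsdeletetest")          # illustrative name
c.create("nsdeletetest", "Service", "test-service")
c.delete_namespace("nsdeletetest")
c.create_namespace("nsdeletetest")          # recreate with the same name
print(c.list("nsdeletetest", "Service"))    # []
```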
• [SLOW TEST:31.255 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":281,"completed":9,"skipped":112,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:22:31.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Apr 4 17:22:35.941: INFO: Successfully updated pod "labelsupdate33545cbf-3644-4d20-b7fd-5621e1693649"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:22:38.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5673" for this suite.
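The Downward API volume spec above projects the pod's own labels into a file and then verifies the kubelet rewrites that file after the labels are updated. A sketch of the volume wiring involved — the pod name, image, and mount path are illustrative, not taken from the log:

```python
# Sketch of a downward API volume that projects the pod's own labels into
# a file; the kubelet refreshes the file when the labels change.
labels_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "labelsupdate-demo",       # illustrative name
        "labels": {"key": "value1"},
    },
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox:1.29",
            "command": ["/bin/sh", "-c",
                        "while true; do cat /etc/podinfo/labels; sleep 5; done"],
            "volumeMounts": [{"name": "podinfo", "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "labels",
                    # fieldRef resolves against the enclosing pod object.
                    "fieldRef": {"fieldPath": "metadata.labels"},
                }],
            },
        }],
    },
}

vol = labels_pod["spec"]["volumes"][0]
print(vol["downwardAPI"]["items"][0]["fieldRef"]["fieldPath"])  # metadata.labels
```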
• [SLOW TEST:6.733 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":281,"completed":10,"skipped":124,"failed":0}
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:22:38.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 4 17:22:38.677: INFO: Waiting up to 5m0s for pod "pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12" in namespace "emptydir-7346" to be "Succeeded or Failed"
Apr 4 17:22:38.779: INFO: Pod "pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12": Phase="Pending", Reason="", readiness=false. Elapsed: 101.384015ms
Apr 4 17:22:40.805: INFO: Pod "pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127795658s
Apr 4 17:22:42.807: INFO: Pod "pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12": Phase="Running", Reason="", readiness=true. Elapsed: 4.130169071s
Apr 4 17:22:44.828: INFO: Pod "pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151126411s
STEP: Saw pod success
Apr 4 17:22:44.828: INFO: Pod "pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12" satisfied condition "Succeeded or Failed"
Apr 4 17:22:44.831: INFO: Trying to get logs from node latest-worker2 pod pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12 container test-container:
STEP: delete the pod
Apr 4 17:22:44.944: INFO: Waiting for pod pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12 to disappear
Apr 4 17:22:44.966: INFO: Pod pod-5b097b6c-6cab-434c-b114-c8cdf5dfeb12 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:22:44.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7346" for this suite.
• [SLOW TEST:6.914 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":11,"skipped":124,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:22:44.974: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249
[It] should check if v1 is in available api versions [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating api versions
Apr 4 17:22:45.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config api-versions'
Apr 4 17:22:45.243: INFO: stderr: ""
Apr 4 17:22:45.243: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:22:45.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5976" for this suite.
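The api-versions spec above shells out to kubectl and only asserts that the core group version `v1` appears in the newline-separated output. Reproducing that membership check in Python against a trimmed excerpt of the stdout captured in the log:

```python
# Trimmed excerpt of the `kubectl api-versions` stdout from the log; the
# e2e check only cares that the core "v1" group/version is present.
api_versions_stdout = (
    "admissionregistration.k8s.io/v1\n"
    "apps/v1\n"
    "batch/v1\n"
    "networking.k8s.io/v1\n"
    "rbac.authorization.k8s.io/v1\n"
    "storage.k8s.io/v1\n"
    "v1\n"
)

group_versions = api_versions_stdout.strip().split("\n")
# "v1" must match exactly; "apps/v1" etc. are distinct group/versions.
print("v1" in group_versions)  # True
```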
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":281,"completed":12,"skipped":142,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:22:45.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-map-126d6ee8-c85a-4449-a628-a4390d8fba47
STEP: Creating a pod to test consume configMaps
Apr 4 17:22:45.348: INFO: Waiting up to 5m0s for pod "pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded" in namespace "configmap-5627" to be "Succeeded or Failed"
Apr 4 17:22:45.361: INFO: Pod "pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded": Phase="Pending", Reason="", readiness=false. Elapsed: 13.447348ms
Apr 4 17:22:47.776: INFO: Pod "pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded": Phase="Pending", Reason="", readiness=false. Elapsed: 2.428659334s
Apr 4 17:22:49.779: INFO: Pod "pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded": Phase="Pending", Reason="", readiness=false. Elapsed: 4.43137021s
Apr 4 17:22:51.803: INFO: Pod "pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded": Phase="Pending", Reason="", readiness=false. Elapsed: 6.454857981s
Apr 4 17:22:53.806: INFO: Pod "pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded": Phase="Running", Reason="", readiness=true. Elapsed: 8.458589493s
Apr 4 17:22:55.810: INFO: Pod "pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.462058619s
STEP: Saw pod success
Apr 4 17:22:55.810: INFO: Pod "pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded" satisfied condition "Succeeded or Failed"
Apr 4 17:22:55.812: INFO: Trying to get logs from node latest-worker pod pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded container configmap-volume-test:
STEP: delete the pod
Apr 4 17:22:55.845: INFO: Waiting for pod pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded to disappear
Apr 4 17:22:55.855: INFO: Pod pod-configmaps-5eb64946-7a38-4d3f-be9e-d2b5cc539ded no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:22:55.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5627" for this suite.
• [SLOW TEST:10.609 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":281,"completed":13,"skipped":159,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:22:55.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-060a5d1b-f898-4a58-b4cb-8e7b7f2e0fd9
STEP: Creating a pod to test consume secrets
Apr 4 17:22:55.916: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19" in namespace "projected-1330" to be "Succeeded or Failed"
Apr 4 17:22:55.958: INFO: Pod "pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19": Phase="Pending", Reason="", readiness=false. Elapsed: 41.762235ms
Apr 4 17:22:57.961: INFO: Pod "pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044746675s
Apr 4 17:22:59.964: INFO: Pod "pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047879945s
Apr 4 17:23:02.035: INFO: Pod "pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118770307s
Apr 4 17:23:04.039: INFO: Pod "pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122090798s
STEP: Saw pod success
Apr 4 17:23:04.039: INFO: Pod "pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19" satisfied condition "Succeeded or Failed"
Apr 4 17:23:04.041: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19 container projected-secret-volume-test:
STEP: delete the pod
Apr 4 17:23:04.073: INFO: Waiting for pod pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19 to disappear
Apr 4 17:23:04.083: INFO: Pod pod-projected-secrets-6c674541-bb15-4509-ac31-f1d63df2db19 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:23:04.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1330" for this suite.
• [SLOW TEST:8.227 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":281,"completed":14,"skipped":168,"failed":0}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:23:04.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-wk98
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 17:23:04.599: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wk98" in namespace "subpath-499" to be "Succeeded or Failed"
Apr 4 17:23:04.601: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Pending", Reason="", readiness=false. Elapsed: 1.85545ms
Apr 4 17:23:06.952: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.352794244s
Apr 4 17:23:08.955: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356087472s
Apr 4 17:23:10.959: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Pending", Reason="", readiness=false. Elapsed: 6.359652273s
Apr 4 17:23:12.962: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 8.363372612s
Apr 4 17:23:14.966: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 10.366935001s
Apr 4 17:23:16.977: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 12.377827783s
Apr 4 17:23:18.979: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 14.380105709s
Apr 4 17:23:20.983: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 16.383904439s
Apr 4 17:23:22.988: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 18.389169377s
Apr 4 17:23:24.992: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 20.393035121s
Apr 4 17:23:26.995: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 22.396461003s
Apr 4 17:23:28.999: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 24.400250521s
Apr 4 17:23:31.003: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 26.403820337s
Apr 4 17:23:33.015: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 28.41574241s
Apr 4 17:23:35.451: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 30.85208426s
Apr 4 17:23:37.458: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Running", Reason="", readiness=true. Elapsed: 32.859335896s
Apr 4 17:23:39.461: INFO: Pod "pod-subpath-test-configmap-wk98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.862398335s
STEP: Saw pod success
Apr 4 17:23:39.461: INFO: Pod "pod-subpath-test-configmap-wk98" satisfied condition "Succeeded or Failed"
Apr 4 17:23:39.463: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-wk98 container test-container-subpath-configmap-wk98:
STEP: delete the pod
Apr 4 17:23:39.960: INFO: Waiting for pod pod-subpath-test-configmap-wk98 to disappear
Apr 4 17:23:39.973: INFO: Pod pod-subpath-test-configmap-wk98 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-wk98
Apr 4 17:23:39.973: INFO: Deleting pod "pod-subpath-test-configmap-wk98" in namespace "subpath-499"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:23:39.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-499" for this suite.
• [SLOW TEST:36.062 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":281,"completed":15,"skipped":173,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:23:40.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 4 17:23:45.090: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:23:45.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5809" for this suite.
• [SLOW TEST:5.005 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":281,"completed":16,"skipped":217,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:23:45.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-4a10d093-45d3-41f3-a25f-a4c8b6b9fd76 in namespace container-probe-7956
Apr 4 17:23:49.278: INFO: Started pod busybox-4a10d093-45d3-41f3-a25f-a4c8b6b9fd76 in namespace container-probe-7956
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 17:23:49.280: INFO: Initial restart count of pod busybox-4a10d093-45d3-41f3-a25f-a4c8b6b9fd76 is 0
Apr 4 17:24:37.394: INFO: Restart count of pod container-probe-7956/busybox-4a10d093-45d3-41f3-a25f-a4c8b6b9fd76 is now 1 (48.114079635s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:24:37.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7956" for this suite.
• [SLOW TEST:52.294 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":281,"completed":17,"skipped":237,"failed":0}
[sig-api-machinery] ResourceQuota
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:24:37.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:24:53.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2216" for this suite.
• [SLOW TEST:16.232 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":281,"completed":18,"skipped":237,"failed":0}
S
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:24:53.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:24:53.755: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-1e48a296-585c-42eb-a952-f36d87f2914a" in namespace "security-context-test-6662" to be "Succeeded or Failed"
Apr 4 17:24:53.797: INFO: Pod "busybox-readonly-false-1e48a296-585c-42eb-a952-f36d87f2914a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.355564ms
Apr 4 17:24:55.801: INFO: Pod "busybox-readonly-false-1e48a296-585c-42eb-a952-f36d87f2914a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046251541s
Apr 4 17:24:57.805: INFO: Pod "busybox-readonly-false-1e48a296-585c-42eb-a952-f36d87f2914a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050039482s
Apr 4 17:24:57.805: INFO: Pod "busybox-readonly-false-1e48a296-585c-42eb-a952-f36d87f2914a" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:24:57.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6662" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":281,"completed":19,"skipped":238,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:24:57.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0404 17:25:07.897044 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 4 17:25:07.897: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:25:07.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5391" for this suite.
• [SLOW TEST:10.089 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":281,"completed":20,"skipped":257,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:25:07.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 4 17:25:08.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d989f460-75f8-43ec-b142-d6c3be8a62e6" in namespace "downward-api-3650" to be "Succeeded or Failed"
Apr 4 17:25:08.038: INFO: Pod "downwardapi-volume-d989f460-75f8-43ec-b142-d6c3be8a62e6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.551965ms
Apr 4 17:25:10.042: INFO: Pod "downwardapi-volume-d989f460-75f8-43ec-b142-d6c3be8a62e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019700842s
Apr 4 17:25:12.047: INFO: Pod "downwardapi-volume-d989f460-75f8-43ec-b142-d6c3be8a62e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024725472s
STEP: Saw pod success
Apr 4 17:25:12.047: INFO: Pod "downwardapi-volume-d989f460-75f8-43ec-b142-d6c3be8a62e6" satisfied condition "Succeeded or Failed"
Apr 4 17:25:12.051: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-d989f460-75f8-43ec-b142-d6c3be8a62e6 container client-container:
STEP: delete the pod
Apr 4 17:25:12.123: INFO: Waiting for pod downwardapi-volume-d989f460-75f8-43ec-b142-d6c3be8a62e6 to disappear
Apr 4 17:25:12.152: INFO: Pod downwardapi-volume-d989f460-75f8-43ec-b142-d6c3be8a62e6 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:25:12.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3650" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":281,"completed":21,"skipped":260,"failed":0}
------------------------------
[k8s.io] Pods
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:25:12.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:180
[It] should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 4 17:25:12.292: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:25:23.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5513" for this suite.
• [SLOW TEST:10.852 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should be submitted and removed [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":281,"completed":22,"skipped":260,"failed":0}
S
------------------------------
[k8s.io] Pods
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:25:23.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:180
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:25:23.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:25:27.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1339" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":281,"completed":23,"skipped":261,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:25:27.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 17:25:28.020: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 17:25:30.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721617928, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721617928, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721617928, 
loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721617927, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 17:25:33.064: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:25:45.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9175" for this suite.
STEP: Destroying namespace "webhook-9175-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:18.126 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":281,"completed":24,"skipped":299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:25:45.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:25:49.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1491" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":281,"completed":25,"skipped":327,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:25:49.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod with failed condition
STEP: updating the pod
Apr 4 17:27:50.060: INFO: Successfully updated pod "var-expansion-534d7462-51cf-4470-9cf7-5af2d97d6a27"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Apr 4 17:27:52.070: INFO: Deleting pod "var-expansion-534d7462-51cf-4470-9cf7-5af2d97d6a27" in namespace "var-expansion-3225"
Apr 4 17:27:52.075: INFO: Wait up to 5m0s for pod "var-expansion-534d7462-51cf-4470-9cf7-5af2d97d6a27" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:28:34.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3225" for this suite.
• [SLOW TEST:164.663 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":281,"completed":26,"skipped":351,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:28:34.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 17:28:34.589: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 17:28:36.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721618114, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721618114, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721618114, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721618114, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 17:28:39.629: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:28:39.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:28:40.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1211" for this suite.
STEP: Destroying namespace "webhook-1211-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.752 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":281,"completed":27,"skipped":359,"failed":0}
SSSS
------------------------------
[sig-network] DNS should provide DNS for services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:28:40.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3461.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3461.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3461.svc.cluster.local A)" && test -n "$$check" &&
echo OK > /results/wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3461.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3461.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3461.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3461.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3461.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3461.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3461.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.238.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.238.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.238.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.238.188_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3461.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3461.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3461.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3461.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3461.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3461.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3461.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3461.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3461.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3461.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3461.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 188.238.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.238.188_udp@PTR;check="$$(dig +tcp +noall +answer +search 188.238.96.10.in-addr.arpa.
PTR)" && test -n "$$check" && echo OK > /results/10.96.238.188_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 17:28:47.060: INFO: Unable to read wheezy_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:47.063: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:47.066: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:47.069: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:47.092: INFO: Unable to read jessie_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:47.095: INFO: Unable to read jessie_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:47.098: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:47.100: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:47.116: INFO: Lookups using dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2 failed for: [wheezy_udp@dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_udp@dns-test-service.dns-3461.svc.cluster.local jessie_tcp@dns-test-service.dns-3461.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local]
Apr 4 17:28:52.121: INFO: Unable to read wheezy_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:52.125: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:52.129: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:52.133: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:52.174: INFO: Unable to read jessie_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:52.176: INFO: Unable to read jessie_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:52.179: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:52.182: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:52.197: INFO: Lookups using dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2 failed for: [wheezy_udp@dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_udp@dns-test-service.dns-3461.svc.cluster.local jessie_tcp@dns-test-service.dns-3461.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local]
Apr 4 17:28:57.121: INFO: Unable to read wheezy_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:57.124: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:57.128: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:57.131: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:57.182: INFO: Unable to read jessie_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:57.185: INFO: Unable to read jessie_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:57.187: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:57.189: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:28:57.203: INFO: Lookups using dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2 failed for: [wheezy_udp@dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_udp@dns-test-service.dns-3461.svc.cluster.local jessie_tcp@dns-test-service.dns-3461.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local]
Apr 4 17:29:02.138: INFO: Unable to read wheezy_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:02.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:02.149: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:02.152: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:02.173: INFO: Unable to read jessie_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:02.176: INFO: Unable to read jessie_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:02.179: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:02.182: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:02.201: INFO: Lookups using dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2 failed for: [wheezy_udp@dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_udp@dns-test-service.dns-3461.svc.cluster.local jessie_tcp@dns-test-service.dns-3461.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local]
Apr 4 17:29:07.120: INFO: Unable to read wheezy_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:07.124: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:07.127: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:07.130: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:07.150: INFO: Unable to read jessie_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:07.152: INFO: Unable to read jessie_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:07.164: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:07.167: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:07.184: INFO: Lookups using dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2 failed for: [wheezy_udp@dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_udp@dns-test-service.dns-3461.svc.cluster.local jessie_tcp@dns-test-service.dns-3461.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local]
Apr 4 17:29:12.121: INFO: Unable to read wheezy_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:12.125: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:12.129: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:12.132: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:12.154: INFO: Unable to read jessie_udp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:12.157: INFO: Unable to read jessie_tcp@dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:12.159: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:12.161: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local from pod dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2: the server could not find the requested resource (get pods dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2)
Apr 4 17:29:12.180: INFO: Lookups using dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2 failed for: [wheezy_udp@dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@dns-test-service.dns-3461.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_udp@dns-test-service.dns-3461.svc.cluster.local jessie_tcp@dns-test-service.dns-3461.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3461.svc.cluster.local]
Apr 4 17:29:17.185: INFO: DNS probes using dns-3461/dns-test-16a0756c-a5ec-4eec-844e-f8a00accb1f2 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:29:17.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3461" for this suite.
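The dig probes logged above build their query names with plain shell text manipulation: the pod's A record comes from rewriting the pod IP's dots to dashes, and the PTR query name comes from reversing the service ClusterIP's octets. As an illustration only (a minimal sketch; the helper function names and the sample pod IP `10.244.1.7` are hypothetical, while the `dns-3461` namespace and the service IP `10.96.238.188` are taken from the log), the two transformations can be written as:

```shell
#!/bin/sh
# Sketch of the name construction used by the probe commands above.

# pod_a_record IP NAMESPACE: map a pod IP to its cluster-DNS A record,
# mirroring the `hostname -i | awk ...` step in the logged command.
pod_a_record() {
  echo "$1" | awk -F. -v ns="$2" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

# ptr_name IP: map a service ClusterIP to the reverse-lookup name that
# the probes query with `dig ... PTR`.
ptr_name() {
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}

pod_a_record 10.244.1.7 dns-3461   # -> 10-244-1-7.dns-3461.pod.cluster.local
ptr_name 10.96.238.188             # -> 188.238.96.10.in-addr.arpa.
```

The probe pod writes an `OK` marker file per name only when the corresponding dig answer is non-empty, which is why the early iterations above report missing result files until the service records propagate.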
• [SLOW TEST:36.963 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":281,"completed":28,"skipped":363,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:29:17.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:29:17.909: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:29:18.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6132" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":281,"completed":29,"skipped":372,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:29:18.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249
[It] should create and stop a working application [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Apr 4 17:29:18.989: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Apr 4 17:29:18.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1748'
Apr 4 17:29:19.334: INFO: stderr: ""
Apr 4 17:29:19.334: INFO: stdout: "service/agnhost-slave created\n"
Apr 4 17:29:19.335: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Apr 4 17:29:19.335: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1748'
Apr 4 17:29:19.812: INFO: stderr: ""
Apr 4 17:29:19.812: INFO: stdout: "service/agnhost-master created\n"
Apr 4 17:29:19.812: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 4 17:29:19.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1748'
Apr 4 17:29:20.221: INFO: stderr: ""
Apr 4 17:29:20.221: INFO: stdout: "service/frontend created\n"
Apr 4 17:29:20.221: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Apr 4 17:29:20.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1748'
Apr 4 17:29:20.596: INFO: stderr: ""
Apr 4 17:29:20.596: INFO: stdout: "deployment.apps/frontend created\n"
Apr 4 17:29:20.596: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 4 17:29:20.596: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1748'
Apr 4 17:29:20.947: INFO: stderr: ""
Apr 4 17:29:20.947: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 4 17:29:20.947: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 4 17:29:20.947: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1748'
Apr 4 17:29:21.205: INFO: stderr: ""
Apr 4 17:29:21.205: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 4 17:29:21.205: INFO: Waiting for all frontend pods to be Running.
Apr 4 17:29:31.255: INFO: Waiting for frontend to serve content.
Apr 4 17:29:31.266: INFO: Trying to add a new entry to the guestbook.
Apr 4 17:29:31.276: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 4 17:29:31.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1748'
Apr 4 17:29:31.442: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 17:29:31.442: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 17:29:31.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1748'
Apr 4 17:29:31.613: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 17:29:31.613: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 17:29:31.613: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1748'
Apr 4 17:29:31.876: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 17:29:31.876: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 17:29:31.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1748'
Apr 4 17:29:31.989: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 17:29:31.989: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 17:29:31.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1748'
Apr 4 17:29:32.101: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 17:29:32.101: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 4 17:29:32.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1748'
Apr 4 17:29:32.199: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 4 17:29:32.199: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:29:32.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1748" for this suite.
• [SLOW TEST:13.270 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:340
    should create and stop a working application [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":281,"completed":30,"skipped":394,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:29:32.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0404 17:29:33.192943 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 4 17:29:33.192: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:29:33.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3760" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":281,"completed":31,"skipped":430,"failed":0}
SSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:29:33.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:180
[It] should be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 4 17:29:41.136: INFO: Successfully updated pod "pod-update-894c706b-a700-4d49-88ec-7139f9c9a39b"
STEP: verifying the updated pod is in kubernetes
Apr 4 17:29:41.156: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:29:41.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1555" for this suite.
• [SLOW TEST:7.972 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":281,"completed":32,"skipped":434,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:29:41.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 4 17:29:41.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e" in namespace "downward-api-7317" to be "Succeeded or Failed"
Apr 4 17:29:41.647: INFO: Pod "downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.798262ms
Apr 4 17:29:43.651: INFO: Pod "downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022954368s
Apr 4 17:29:45.655: INFO: Pod "downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e": Phase="Running", Reason="", readiness=true. Elapsed: 4.026605819s
Apr 4 17:29:47.659: INFO: Pod "downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030762523s
STEP: Saw pod success
Apr 4 17:29:47.659: INFO: Pod "downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e" satisfied condition "Succeeded or Failed"
Apr 4 17:29:47.662: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e container client-container:
STEP: delete the pod
Apr 4 17:29:47.706: INFO: Waiting for pod downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e to disappear
Apr 4 17:29:47.710: INFO: Pod downwardapi-volume-3e2a0777-f3af-4d9c-a196-d0cdee6d4e6e no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:29:47.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7317" for this suite.
• [SLOW TEST:6.544 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":281,"completed":33,"skipped":442,"failed":0}
[sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:29:47.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Apr 4 17:29:47.765: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Apr 4 17:29:47.796: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 4 17:29:47.796: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Apr 4 17:29:47.806: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}]
Apr 4 17:29:47.807: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Apr 4 17:29:47.861: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Apr 4 17:29:47.861: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Apr 4 17:29:55.389: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:29:55.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5633" for this suite.
• [SLOW TEST:7.719 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":281,"completed":34,"skipped":442,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:29:55.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 4 17:30:16.040: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 4 17:30:16.046: INFO: Pod pod-with-poststart-http-hook still exists
Apr 4 17:30:18.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 4 17:30:18.060: INFO: Pod pod-with-poststart-http-hook still exists
Apr 4 17:30:20.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 4 17:30:20.058: INFO: Pod pod-with-poststart-http-hook still exists
Apr 4 17:30:22.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 4 17:30:22.050: INFO: Pod pod-with-poststart-http-hook still exists
Apr 4 17:30:24.046: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 4 17:30:24.050: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:30:24.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7644" for this suite.
• [SLOW TEST:28.622 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":281,"completed":35,"skipped":465,"failed":0}
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:30:24.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:30:24.215: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 4 17:30:26.284: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:30:27.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3009" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":281,"completed":36,"skipped":465,"failed":0}
SSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:30:27.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-2892, will wait for the garbage collector to delete the pods
Apr 4 17:30:34.056: INFO: Deleting Job.batch foo took: 6.778693ms
Apr 4 17:30:34.156: INFO: Terminating Job.batch foo pods took: 100.200703ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:31:13.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-2892" for this suite.
• [SLOW TEST:45.659 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":281,"completed":37,"skipped":471,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:31:13.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 4 17:31:21.229: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 17:31:21.233: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 17:31:23.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 17:31:23.237: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 17:31:25.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 17:31:25.237: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 17:31:27.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 17:31:27.237: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 17:31:29.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 17:31:29.237: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 17:31:31.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 17:31:31.237: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 4 17:31:33.233: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 4 17:31:33.237: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:31:33.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2690" for this suite.
• [SLOW TEST:20.177 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":281,"completed":38,"skipped":489,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:31:33.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Apr 4 17:31:33.743: INFO: deployment "sample-crd-conversion-webhook-deployment" 
doesn't have the required revision set Apr 4 17:31:35.752: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721618293, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721618293, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721618293, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721618293, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 17:31:38.767: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 17:31:38.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:31:40.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4977" for this suite. 
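The conversion-webhook wiring this test sets up corresponds roughly to a CRD like the following (a sketch: the group, kind, and webhook path are hypothetical, while the service name e2e-test-crd-conversion-webhook and namespace crd-webhook-4977 come from the log above):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.example.com          # hypothetical group and name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true                          # v1 is the storage version
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: crd-webhook-4977            # from the log
          name: e2e-test-crd-conversion-webhook  # from the log
          path: /crdconvert                      # hypothetical path
```

Because the test creates one CR in v1 and one in v2 and then lists in both versions, the webhook must convert in both directions — that is what "a non homogeneous list of CRs" exercises.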
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:6.891 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":281,"completed":39,"skipped":558,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:31:40.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 17:31:40.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df667006-31a5-4629-b4bd-09123baa96ef" in namespace "projected-614" to 
be "Succeeded or Failed" Apr 4 17:31:40.210: INFO: Pod "downwardapi-volume-df667006-31a5-4629-b4bd-09123baa96ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024311ms Apr 4 17:31:42.213: INFO: Pod "downwardapi-volume-df667006-31a5-4629-b4bd-09123baa96ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007530824s Apr 4 17:31:44.217: INFO: Pod "downwardapi-volume-df667006-31a5-4629-b4bd-09123baa96ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011356823s STEP: Saw pod success Apr 4 17:31:44.217: INFO: Pod "downwardapi-volume-df667006-31a5-4629-b4bd-09123baa96ef" satisfied condition "Succeeded or Failed" Apr 4 17:31:44.220: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-df667006-31a5-4629-b4bd-09123baa96ef container client-container: STEP: delete the pod Apr 4 17:31:44.263: INFO: Waiting for pod downwardapi-volume-df667006-31a5-4629-b4bd-09123baa96ef to disappear Apr 4 17:31:44.269: INFO: Pod downwardapi-volume-df667006-31a5-4629-b4bd-09123baa96ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:31:44.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-614" for this suite. 
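The "should provide podname only" case above mounts the downward API through a projected volume. A minimal manifest of that shape looks like this (a sketch: the container name client-container matches the log, but the pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # illustrative; the suite appends a UID
spec:
  restartPolicy: Never                 # pod runs once, then "Succeeded or Failed"
  containers:
  - name: client-container             # container name from the log
    image: busybox                     # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # exposes the pod's own name as a file
```

The test then waits for phase Succeeded and checks the container logs for the expected pod name, matching the Pending → Succeeded progression in the log.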
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":281,"completed":40,"skipped":611,"failed":0} ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:31:44.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Apr 4 17:31:44.311: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6013' Apr 4 17:31:44.624: INFO: stderr: "" Apr 4 17:31:44.624: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 4 17:31:44.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6013' Apr 4 17:31:44.733: INFO: stderr: "" Apr 4 17:31:44.733: INFO: stdout: "update-demo-nautilus-955pv update-demo-nautilus-jjjk8 " Apr 4 17:31:44.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-955pv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6013' Apr 4 17:31:44.822: INFO: stderr: "" Apr 4 17:31:44.822: INFO: stdout: "" Apr 4 17:31:44.822: INFO: update-demo-nautilus-955pv is created but not running Apr 4 17:31:49.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6013' Apr 4 17:31:52.348: INFO: stderr: "" Apr 4 17:31:52.348: INFO: stdout: "update-demo-nautilus-955pv update-demo-nautilus-jjjk8 " Apr 4 17:31:52.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-955pv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6013' Apr 4 17:31:52.437: INFO: stderr: "" Apr 4 17:31:52.437: INFO: stdout: "true" Apr 4 17:31:52.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-955pv -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6013' Apr 4 17:31:52.525: INFO: stderr: "" Apr 4 17:31:52.525: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 17:31:52.525: INFO: validating pod update-demo-nautilus-955pv Apr 4 17:31:52.529: INFO: got data: { "image": "nautilus.jpg" } Apr 4 17:31:52.529: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 17:31:52.529: INFO: update-demo-nautilus-955pv is verified up and running Apr 4 17:31:52.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjjk8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6013' Apr 4 17:31:52.625: INFO: stderr: "" Apr 4 17:31:52.625: INFO: stdout: "true" Apr 4 17:31:52.625: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jjjk8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6013' Apr 4 17:31:52.715: INFO: stderr: "" Apr 4 17:31:52.715: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 17:31:52.715: INFO: validating pod update-demo-nautilus-jjjk8 Apr 4 17:31:52.719: INFO: got data: { "image": "nautilus.jpg" } Apr 4 17:31:52.719: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 4 17:31:52.719: INFO: update-demo-nautilus-jjjk8 is verified up and running STEP: using delete to clean up resources Apr 4 17:31:52.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6013' Apr 4 17:31:52.818: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 4 17:31:52.818: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 4 17:31:52.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6013' Apr 4 17:31:52.914: INFO: stderr: "No resources found in kubectl-6013 namespace.\n" Apr 4 17:31:52.914: INFO: stdout: "" Apr 4 17:31:52.914: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6013 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 17:31:53.017: INFO: stderr: "" Apr 4 17:31:53.017: INFO: stdout: "update-demo-nautilus-955pv\nupdate-demo-nautilus-jjjk8\n" Apr 4 17:31:53.517: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6013' Apr 4 17:31:53.615: INFO: stderr: "No resources found in kubectl-6013 namespace.\n" Apr 4 17:31:53.615: INFO: stdout: "" Apr 4 17:31:53.615: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6013 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 
17:31:53.717: INFO: stderr: "" Apr 4 17:31:53.717: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:31:53.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6013" for this suite. • [SLOW TEST:9.448 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":281,"completed":41,"skipped":611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:31:53.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:31:53.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1284" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":281,"completed":42,"skipped":638,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:31:53.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-d1e6b965-e801-493f-9fc3-443d409c6572 STEP: Creating a pod to test consume configMaps Apr 4 17:31:54.064: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ceff2753-02e7-4d47-9c02-7c5e83fc0481" in namespace "projected-713" to be "Succeeded or Failed" Apr 4 17:31:54.067: INFO: Pod "pod-projected-configmaps-ceff2753-02e7-4d47-9c02-7c5e83fc0481": Phase="Pending", Reason="", readiness=false. Elapsed: 3.443416ms Apr 4 17:31:56.071: INFO: Pod "pod-projected-configmaps-ceff2753-02e7-4d47-9c02-7c5e83fc0481": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007498425s Apr 4 17:31:58.074: INFO: Pod "pod-projected-configmaps-ceff2753-02e7-4d47-9c02-7c5e83fc0481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010762061s STEP: Saw pod success Apr 4 17:31:58.075: INFO: Pod "pod-projected-configmaps-ceff2753-02e7-4d47-9c02-7c5e83fc0481" satisfied condition "Succeeded or Failed" Apr 4 17:31:58.077: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-ceff2753-02e7-4d47-9c02-7c5e83fc0481 container projected-configmap-volume-test: STEP: delete the pod Apr 4 17:31:58.092: INFO: Waiting for pod pod-projected-configmaps-ceff2753-02e7-4d47-9c02-7c5e83fc0481 to disappear Apr 4 17:31:58.096: INFO: Pod pod-projected-configmaps-ceff2753-02e7-4d47-9c02-7c5e83fc0481 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:31:58.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-713" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":281,"completed":43,"skipped":651,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:31:58.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-6e2122ca-095f-4582-9446-cec23219f999 STEP: Creating a pod to test consume configMaps Apr 4 17:31:58.224: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bfd259e2-8207-4799-b5b5-7f0c6f142baf" in namespace "projected-7801" to be "Succeeded or Failed" Apr 4 17:31:58.244: INFO: Pod "pod-projected-configmaps-bfd259e2-8207-4799-b5b5-7f0c6f142baf": Phase="Pending", Reason="", readiness=false. Elapsed: 19.830671ms Apr 4 17:32:00.248: INFO: Pod "pod-projected-configmaps-bfd259e2-8207-4799-b5b5-7f0c6f142baf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023677134s Apr 4 17:32:02.252: INFO: Pod "pod-projected-configmaps-bfd259e2-8207-4799-b5b5-7f0c6f142baf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.027787545s STEP: Saw pod success Apr 4 17:32:02.252: INFO: Pod "pod-projected-configmaps-bfd259e2-8207-4799-b5b5-7f0c6f142baf" satisfied condition "Succeeded or Failed" Apr 4 17:32:02.256: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-bfd259e2-8207-4799-b5b5-7f0c6f142baf container projected-configmap-volume-test: STEP: delete the pod Apr 4 17:32:02.298: INFO: Waiting for pod pod-projected-configmaps-bfd259e2-8207-4799-b5b5-7f0c6f142baf to disappear Apr 4 17:32:02.330: INFO: Pod pod-projected-configmaps-bfd259e2-8207-4799-b5b5-7f0c6f142baf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:32:02.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7801" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":281,"completed":44,"skipped":701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:32:02.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 17:32:02.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5452' Apr 4 17:32:02.668: INFO: stderr: "" Apr 4 17:32:02.668: INFO: stdout: "replicationcontroller/agnhost-master created\n" Apr 4 17:32:02.668: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5452' Apr 4 17:32:02.949: INFO: stderr: "" Apr 4 17:32:02.949: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Apr 4 17:32:03.959: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 17:32:03.959: INFO: Found 0 / 1 Apr 4 17:32:04.953: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 17:32:04.953: INFO: Found 0 / 1 Apr 4 17:32:05.954: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 17:32:05.954: INFO: Found 1 / 1 Apr 4 17:32:05.954: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 4 17:32:05.956: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 17:32:05.957: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Apr 4 17:32:05.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe pod agnhost-master-8chlh --namespace=kubectl-5452' Apr 4 17:32:06.075: INFO: stderr: "" Apr 4 17:32:06.075: INFO: stdout: "Name: agnhost-master-8chlh\nNamespace: kubectl-5452\nPriority: 0\nNode: latest-worker2/172.17.0.12\nStart Time: Sat, 04 Apr 2020 17:32:02 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.197\nIPs:\n IP: 10.244.1.197\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://0116063a712514ce6b1ee51453dce84cb6a6cf1651e81474653ace87660488d9\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 04 Apr 2020 17:32:05 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-6ntg6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-6ntg6:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-6ntg6\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-5452/agnhost-master-8chlh to latest-worker2\n Normal Pulled 3s kubelet, latest-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 2s kubelet, latest-worker2 Created container agnhost-master\n Normal Started 1s kubelet, latest-worker2 Started container 
agnhost-master\n" Apr 4 17:32:06.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5452' Apr 4 17:32:06.219: INFO: stderr: "" Apr 4 17:32:06.219: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5452\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-8chlh\n" Apr 4 17:32:06.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5452' Apr 4 17:32:06.340: INFO: stderr: "" Apr 4 17:32:06.340: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5452\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.96.116.98\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.197:6379\nSession Affinity: None\nEvents: \n" Apr 4 17:32:06.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe node latest-control-plane' Apr 4 17:32:06.463: INFO: stderr: "" Apr 4 17:32:06.463: INFO: stdout: "Name: latest-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=latest-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n 
node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:27:32 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: latest-control-plane\n AcquireTime: \n RenewTime: Sat, 04 Apr 2020 17:32:04 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 04 Apr 2020 17:30:40 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 04 Apr 2020 17:30:40 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 04 Apr 2020 17:30:40 +0000 Sun, 15 Mar 2020 18:27:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 04 Apr 2020 17:30:40 +0000 Sun, 15 Mar 2020 18:28:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.11\n Hostname: latest-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 96fd1b5d260b433d8f617f455164eb5a\n System UUID: 611bedf3-8581-4e6e-a43b-01a437bb59ad\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.17.0\n Kube-Proxy Version: v1.17.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system 
coredns-6955765f44-f7wtl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19d\n kube-system coredns-6955765f44-lq4t7 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 19d\n kube-system etcd-latest-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kindnet-sx5s7 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 19d\n kube-system kube-apiserver-latest-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-controller-manager-latest-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-proxy-jpqvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n kube-system kube-scheduler-latest-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19d\n local-path-storage local-path-provisioner-7745554f7f-fmsmz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Apr 4 17:32:06.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config describe namespace kubectl-5452' Apr 4 17:32:06.568: INFO: stderr: "" Apr 4 17:32:06.568: INFO: stdout: "Name: kubectl-5452\nLabels: e2e-framework=kubectl\n e2e-run=74e16cc9-23f1-4871-9151-dd88cbcd20ec\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:32:06.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5452" for this suite. 
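The agnhost-master objects that kubectl describe reports above can be reconstructed approximately from the describe output itself. The image, labels, selector, port, and named target port below are read off the log; other field values are a sketch:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-master
spec:
  replicas: 1                  # "Replicas: 1 current / 1 desired" in the log
  selector:
    app: agnhost
    role: master
  template:
    metadata:
      labels:
        app: agnhost
        role: master
    spec:
      containers:
      - name: agnhost-master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        ports:
        - name: agnhost-server   # matches "TargetPort: agnhost-server/TCP"
          containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
spec:
  type: ClusterIP
  selector:
    app: agnhost
    role: master
  ports:
  - port: 6379
    targetPort: agnhost-server   # named port, per the describe output
```

The test simply asserts that describe for the pod, rc, service, node, and namespace each print the expected identifying fields.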
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":281,"completed":45,"skipped":724,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:32:06.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:32:17.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-486" for this suite. • [SLOW TEST:11.323 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
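The ResourceQuota-and-service lifecycle exercised by this test can be reproduced by hand with a quota that caps services; a minimal sketch (the object name and limit are illustrative, not the values the test uses):

```yaml
# Illustrative ResourceQuota mirroring what the test creates:
# a quota whose status tracks Service creation and deletion.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota        # illustrative name
spec:
  hard:
    services: "2"
```

After applying this, creating a Service in the namespace should raise `status.used.services`, and deleting it should release the usage again, which is the sequence the STEP lines above assert.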
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":281,"completed":46,"skipped":738,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:32:17.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7319 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-7319 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7319 Apr 4 17:32:18.162: INFO: Found 0 stateful pods, waiting for 1 Apr 4 17:32:28.167: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 4 17:32:28.170: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 17:32:28.478: INFO: stderr: "I0404 17:32:28.304006 773 log.go:172] (0xc000680160) (0xc000bf0140) Create stream\nI0404 17:32:28.304080 773 log.go:172] (0xc000680160) (0xc000bf0140) Stream added, broadcasting: 1\nI0404 17:32:28.309247 773 log.go:172] (0xc000680160) Reply frame received for 1\nI0404 17:32:28.309333 773 log.go:172] (0xc000680160) (0xc000a68000) Create stream\nI0404 17:32:28.309365 773 log.go:172] (0xc000680160) (0xc000a68000) Stream added, broadcasting: 3\nI0404 17:32:28.310494 773 log.go:172] (0xc000680160) Reply frame received for 3\nI0404 17:32:28.310530 773 log.go:172] (0xc000680160) (0xc00065a820) Create stream\nI0404 17:32:28.310541 773 log.go:172] (0xc000680160) (0xc00065a820) Stream added, broadcasting: 5\nI0404 17:32:28.311736 773 log.go:172] (0xc000680160) Reply frame received for 5\nI0404 17:32:28.373550 773 log.go:172] (0xc000680160) Data frame received for 5\nI0404 17:32:28.373576 773 log.go:172] (0xc00065a820) (5) Data frame handling\nI0404 17:32:28.373592 773 log.go:172] (0xc00065a820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 17:32:28.470740 773 log.go:172] (0xc000680160) Data frame received for 3\nI0404 17:32:28.470798 773 log.go:172] (0xc000a68000) (3) Data frame handling\nI0404 17:32:28.470815 773 log.go:172] (0xc000a68000) (3) Data frame sent\nI0404 17:32:28.470827 773 log.go:172] (0xc000680160) Data frame received for 3\nI0404 17:32:28.470838 773 log.go:172] (0xc000a68000) (3) Data frame handling\nI0404 17:32:28.470897 773 log.go:172] (0xc000680160) Data frame received for 5\nI0404 17:32:28.470947 773 log.go:172] (0xc00065a820) (5) Data frame handling\nI0404 17:32:28.472378 773 log.go:172] (0xc000680160) Data frame received for 1\nI0404 17:32:28.472405 773 log.go:172] (0xc000bf0140) (1) 
Data frame handling\nI0404 17:32:28.472421 773 log.go:172] (0xc000bf0140) (1) Data frame sent\nI0404 17:32:28.472444 773 log.go:172] (0xc000680160) (0xc000bf0140) Stream removed, broadcasting: 1\nI0404 17:32:28.472473 773 log.go:172] (0xc000680160) Go away received\nI0404 17:32:28.472892 773 log.go:172] (0xc000680160) (0xc000bf0140) Stream removed, broadcasting: 1\nI0404 17:32:28.472914 773 log.go:172] (0xc000680160) (0xc000a68000) Stream removed, broadcasting: 3\nI0404 17:32:28.472925 773 log.go:172] (0xc000680160) (0xc00065a820) Stream removed, broadcasting: 5\n" Apr 4 17:32:28.478: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 17:32:28.478: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 17:32:28.482: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 4 17:32:38.487: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 17:32:38.487: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 17:32:38.506: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 17:32:38.506: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC }] Apr 4 17:32:38.506: INFO: Apr 4 17:32:38.506: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 4 17:32:39.511: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990043934s Apr 4 17:32:40.516: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.984993458s Apr 4 
17:32:41.639: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980290718s Apr 4 17:32:42.643: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.856972084s Apr 4 17:32:43.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.852991121s Apr 4 17:32:44.652: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.848655775s Apr 4 17:32:45.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.843669098s Apr 4 17:32:46.662: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.838818936s Apr 4 17:32:47.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 834.304606ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7319 Apr 4 17:32:48.673: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 17:32:49.041: INFO: stderr: "I0404 17:32:48.960946 796 log.go:172] (0xc00003a4d0) (0xc0003752c0) Create stream\nI0404 17:32:48.960997 796 log.go:172] (0xc00003a4d0) (0xc0003752c0) Stream added, broadcasting: 1\nI0404 17:32:48.963987 796 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0404 17:32:48.964038 796 log.go:172] (0xc00003a4d0) (0xc00092c000) Create stream\nI0404 17:32:48.964053 796 log.go:172] (0xc00003a4d0) (0xc00092c000) Stream added, broadcasting: 3\nI0404 17:32:48.964877 796 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0404 17:32:48.964912 796 log.go:172] (0xc00003a4d0) (0xc0005c0f00) Create stream\nI0404 17:32:48.964923 796 log.go:172] (0xc00003a4d0) (0xc0005c0f00) Stream added, broadcasting: 5\nI0404 17:32:48.965812 796 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0404 17:32:49.035392 796 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0404 17:32:49.035428 796 log.go:172] (0xc0005c0f00) (5) 
Data frame handling\nI0404 17:32:49.035443 796 log.go:172] (0xc0005c0f00) (5) Data frame sent\nI0404 17:32:49.035452 796 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0404 17:32:49.035458 796 log.go:172] (0xc0005c0f00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 17:32:49.035485 796 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0404 17:32:49.035497 796 log.go:172] (0xc00092c000) (3) Data frame handling\nI0404 17:32:49.035514 796 log.go:172] (0xc00092c000) (3) Data frame sent\nI0404 17:32:49.035524 796 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0404 17:32:49.035535 796 log.go:172] (0xc00092c000) (3) Data frame handling\nI0404 17:32:49.036782 796 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0404 17:32:49.036816 796 log.go:172] (0xc0003752c0) (1) Data frame handling\nI0404 17:32:49.036840 796 log.go:172] (0xc0003752c0) (1) Data frame sent\nI0404 17:32:49.036859 796 log.go:172] (0xc00003a4d0) (0xc0003752c0) Stream removed, broadcasting: 1\nI0404 17:32:49.036881 796 log.go:172] (0xc00003a4d0) Go away received\nI0404 17:32:49.037308 796 log.go:172] (0xc00003a4d0) (0xc0003752c0) Stream removed, broadcasting: 1\nI0404 17:32:49.037323 796 log.go:172] (0xc00003a4d0) (0xc00092c000) Stream removed, broadcasting: 3\nI0404 17:32:49.037330 796 log.go:172] (0xc00003a4d0) (0xc0005c0f00) Stream removed, broadcasting: 5\n" Apr 4 17:32:49.041: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 17:32:49.041: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 17:32:49.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 17:32:49.256: INFO: stderr: "I0404 17:32:49.176022 818 log.go:172] (0xc0003cbc30) 
(0xc000948280) Create stream\nI0404 17:32:49.176088 818 log.go:172] (0xc0003cbc30) (0xc000948280) Stream added, broadcasting: 1\nI0404 17:32:49.179261 818 log.go:172] (0xc0003cbc30) Reply frame received for 1\nI0404 17:32:49.179312 818 log.go:172] (0xc0003cbc30) (0xc000555400) Create stream\nI0404 17:32:49.179327 818 log.go:172] (0xc0003cbc30) (0xc000555400) Stream added, broadcasting: 3\nI0404 17:32:49.180336 818 log.go:172] (0xc0003cbc30) Reply frame received for 3\nI0404 17:32:49.180391 818 log.go:172] (0xc0003cbc30) (0xc000948320) Create stream\nI0404 17:32:49.180409 818 log.go:172] (0xc0003cbc30) (0xc000948320) Stream added, broadcasting: 5\nI0404 17:32:49.181662 818 log.go:172] (0xc0003cbc30) Reply frame received for 5\nI0404 17:32:49.249378 818 log.go:172] (0xc0003cbc30) Data frame received for 3\nI0404 17:32:49.249434 818 log.go:172] (0xc000555400) (3) Data frame handling\nI0404 17:32:49.249453 818 log.go:172] (0xc000555400) (3) Data frame sent\nI0404 17:32:49.249468 818 log.go:172] (0xc0003cbc30) Data frame received for 3\nI0404 17:32:49.249484 818 log.go:172] (0xc000555400) (3) Data frame handling\nI0404 17:32:49.249523 818 log.go:172] (0xc0003cbc30) Data frame received for 5\nI0404 17:32:49.249561 818 log.go:172] (0xc000948320) (5) Data frame handling\nI0404 17:32:49.249581 818 log.go:172] (0xc000948320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0404 17:32:49.249594 818 log.go:172] (0xc0003cbc30) Data frame received for 5\nI0404 17:32:49.249663 818 log.go:172] (0xc000948320) (5) Data frame handling\nI0404 17:32:49.251399 818 log.go:172] (0xc0003cbc30) Data frame received for 1\nI0404 17:32:49.251429 818 log.go:172] (0xc000948280) (1) Data frame handling\nI0404 17:32:49.251442 818 log.go:172] (0xc000948280) (1) Data frame sent\nI0404 17:32:49.251458 818 log.go:172] (0xc0003cbc30) (0xc000948280) Stream removed, broadcasting: 1\nI0404 17:32:49.251501 818 
log.go:172] (0xc0003cbc30) Go away received\nI0404 17:32:49.251765 818 log.go:172] (0xc0003cbc30) (0xc000948280) Stream removed, broadcasting: 1\nI0404 17:32:49.251781 818 log.go:172] (0xc0003cbc30) (0xc000555400) Stream removed, broadcasting: 3\nI0404 17:32:49.251789 818 log.go:172] (0xc0003cbc30) (0xc000948320) Stream removed, broadcasting: 5\n" Apr 4 17:32:49.256: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 17:32:49.256: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 17:32:49.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 17:32:49.462: INFO: stderr: "I0404 17:32:49.389043 841 log.go:172] (0xc00067cb00) (0xc0008c60a0) Create stream\nI0404 17:32:49.389098 841 log.go:172] (0xc00067cb00) (0xc0008c60a0) Stream added, broadcasting: 1\nI0404 17:32:49.392019 841 log.go:172] (0xc00067cb00) Reply frame received for 1\nI0404 17:32:49.392063 841 log.go:172] (0xc00067cb00) (0xc0007f30e0) Create stream\nI0404 17:32:49.392076 841 log.go:172] (0xc00067cb00) (0xc0007f30e0) Stream added, broadcasting: 3\nI0404 17:32:49.393388 841 log.go:172] (0xc00067cb00) Reply frame received for 3\nI0404 17:32:49.393426 841 log.go:172] (0xc00067cb00) (0xc0008c61e0) Create stream\nI0404 17:32:49.393439 841 log.go:172] (0xc00067cb00) (0xc0008c61e0) Stream added, broadcasting: 5\nI0404 17:32:49.394549 841 log.go:172] (0xc00067cb00) Reply frame received for 5\nI0404 17:32:49.452585 841 log.go:172] (0xc00067cb00) Data frame received for 3\nI0404 17:32:49.452614 841 log.go:172] (0xc0007f30e0) (3) Data frame handling\nI0404 17:32:49.452622 841 log.go:172] (0xc0007f30e0) (3) Data frame sent\nI0404 17:32:49.452642 841 log.go:172] (0xc00067cb00) Data frame received for 5\nI0404 
17:32:49.452647 841 log.go:172] (0xc0008c61e0) (5) Data frame handling\nI0404 17:32:49.452652 841 log.go:172] (0xc0008c61e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0404 17:32:49.452739 841 log.go:172] (0xc00067cb00) Data frame received for 3\nI0404 17:32:49.452787 841 log.go:172] (0xc0007f30e0) (3) Data frame handling\nI0404 17:32:49.452829 841 log.go:172] (0xc00067cb00) Data frame received for 5\nI0404 17:32:49.452851 841 log.go:172] (0xc0008c61e0) (5) Data frame handling\nI0404 17:32:49.454702 841 log.go:172] (0xc00067cb00) Data frame received for 1\nI0404 17:32:49.454738 841 log.go:172] (0xc0008c60a0) (1) Data frame handling\nI0404 17:32:49.454764 841 log.go:172] (0xc0008c60a0) (1) Data frame sent\nI0404 17:32:49.454799 841 log.go:172] (0xc00067cb00) (0xc0008c60a0) Stream removed, broadcasting: 1\nI0404 17:32:49.455269 841 log.go:172] (0xc00067cb00) (0xc0008c60a0) Stream removed, broadcasting: 1\nI0404 17:32:49.455295 841 log.go:172] (0xc00067cb00) (0xc0007f30e0) Stream removed, broadcasting: 3\nI0404 17:32:49.455528 841 log.go:172] (0xc00067cb00) (0xc0008c61e0) Stream removed, broadcasting: 5\n" Apr 4 17:32:49.462: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 17:32:49.462: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 17:32:49.466: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 4 17:32:59.471: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 17:32:59.471: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 17:32:59.471: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 4 17:32:59.475: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 17:32:59.661: INFO: stderr: "I0404 17:32:59.594867 864 log.go:172] (0xc000910000) (0xc0006035e0) Create stream\nI0404 17:32:59.594928 864 log.go:172] (0xc000910000) (0xc0006035e0) Stream added, broadcasting: 1\nI0404 17:32:59.597894 864 log.go:172] (0xc000910000) Reply frame received for 1\nI0404 17:32:59.597938 864 log.go:172] (0xc000910000) (0xc000456be0) Create stream\nI0404 17:32:59.597954 864 log.go:172] (0xc000910000) (0xc000456be0) Stream added, broadcasting: 3\nI0404 17:32:59.598783 864 log.go:172] (0xc000910000) Reply frame received for 3\nI0404 17:32:59.598814 864 log.go:172] (0xc000910000) (0xc0008f4000) Create stream\nI0404 17:32:59.598825 864 log.go:172] (0xc000910000) (0xc0008f4000) Stream added, broadcasting: 5\nI0404 17:32:59.599742 864 log.go:172] (0xc000910000) Reply frame received for 5\nI0404 17:32:59.653350 864 log.go:172] (0xc000910000) Data frame received for 3\nI0404 17:32:59.653393 864 log.go:172] (0xc000456be0) (3) Data frame handling\nI0404 17:32:59.653408 864 log.go:172] (0xc000456be0) (3) Data frame sent\nI0404 17:32:59.653421 864 log.go:172] (0xc000910000) Data frame received for 3\nI0404 17:32:59.653440 864 log.go:172] (0xc000456be0) (3) Data frame handling\nI0404 17:32:59.653505 864 log.go:172] (0xc000910000) Data frame received for 5\nI0404 17:32:59.653545 864 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0404 17:32:59.653567 864 log.go:172] (0xc0008f4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 17:32:59.653602 864 log.go:172] (0xc000910000) Data frame received for 5\nI0404 17:32:59.653648 864 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0404 17:32:59.655717 864 log.go:172] (0xc000910000) Data frame received for 1\nI0404 17:32:59.655744 864 log.go:172] 
(0xc0006035e0) (1) Data frame handling\nI0404 17:32:59.655756 864 log.go:172] (0xc0006035e0) (1) Data frame sent\nI0404 17:32:59.655770 864 log.go:172] (0xc000910000) (0xc0006035e0) Stream removed, broadcasting: 1\nI0404 17:32:59.655788 864 log.go:172] (0xc000910000) Go away received\nI0404 17:32:59.656295 864 log.go:172] (0xc000910000) (0xc0006035e0) Stream removed, broadcasting: 1\nI0404 17:32:59.656332 864 log.go:172] (0xc000910000) (0xc000456be0) Stream removed, broadcasting: 3\nI0404 17:32:59.656351 864 log.go:172] (0xc000910000) (0xc0008f4000) Stream removed, broadcasting: 5\n" Apr 4 17:32:59.662: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 17:32:59.662: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 17:32:59.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 17:32:59.997: INFO: stderr: "I0404 17:32:59.783813 884 log.go:172] (0xc00054dad0) (0xc00047ea00) Create stream\nI0404 17:32:59.783860 884 log.go:172] (0xc00054dad0) (0xc00047ea00) Stream added, broadcasting: 1\nI0404 17:32:59.785959 884 log.go:172] (0xc00054dad0) Reply frame received for 1\nI0404 17:32:59.786002 884 log.go:172] (0xc00054dad0) (0xc0009a8000) Create stream\nI0404 17:32:59.786017 884 log.go:172] (0xc00054dad0) (0xc0009a8000) Stream added, broadcasting: 3\nI0404 17:32:59.786748 884 log.go:172] (0xc00054dad0) Reply frame received for 3\nI0404 17:32:59.786785 884 log.go:172] (0xc00054dad0) (0xc0008e6000) Create stream\nI0404 17:32:59.786805 884 log.go:172] (0xc00054dad0) (0xc0008e6000) Stream added, broadcasting: 5\nI0404 17:32:59.787453 884 log.go:172] (0xc00054dad0) Reply frame received for 5\nI0404 17:32:59.843909 884 log.go:172] (0xc00054dad0) Data frame received for 
5\nI0404 17:32:59.843937 884 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0404 17:32:59.843956 884 log.go:172] (0xc0008e6000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 17:32:59.989035 884 log.go:172] (0xc00054dad0) Data frame received for 3\nI0404 17:32:59.989094 884 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0404 17:32:59.989296 884 log.go:172] (0xc0009a8000) (3) Data frame sent\nI0404 17:32:59.989333 884 log.go:172] (0xc00054dad0) Data frame received for 3\nI0404 17:32:59.989355 884 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0404 17:32:59.989389 884 log.go:172] (0xc00054dad0) Data frame received for 5\nI0404 17:32:59.989408 884 log.go:172] (0xc0008e6000) (5) Data frame handling\nI0404 17:32:59.991562 884 log.go:172] (0xc00054dad0) Data frame received for 1\nI0404 17:32:59.991574 884 log.go:172] (0xc00047ea00) (1) Data frame handling\nI0404 17:32:59.991602 884 log.go:172] (0xc00047ea00) (1) Data frame sent\nI0404 17:32:59.991749 884 log.go:172] (0xc00054dad0) (0xc00047ea00) Stream removed, broadcasting: 1\nI0404 17:32:59.991791 884 log.go:172] (0xc00054dad0) Go away received\nI0404 17:32:59.992328 884 log.go:172] (0xc00054dad0) (0xc00047ea00) Stream removed, broadcasting: 1\nI0404 17:32:59.992354 884 log.go:172] (0xc00054dad0) (0xc0009a8000) Stream removed, broadcasting: 3\nI0404 17:32:59.992367 884 log.go:172] (0xc00054dad0) (0xc0008e6000) Stream removed, broadcasting: 5\n" Apr 4 17:32:59.998: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 17:32:59.998: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 17:32:59.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 17:33:00.312: INFO: stderr: 
"I0404 17:33:00.202901 906 log.go:172] (0xc00099a000) (0xc000a7e0a0) Create stream\nI0404 17:33:00.202971 906 log.go:172] (0xc00099a000) (0xc000a7e0a0) Stream added, broadcasting: 1\nI0404 17:33:00.207086 906 log.go:172] (0xc00099a000) Reply frame received for 1\nI0404 17:33:00.207169 906 log.go:172] (0xc00099a000) (0xc00052d4a0) Create stream\nI0404 17:33:00.207205 906 log.go:172] (0xc00099a000) (0xc00052d4a0) Stream added, broadcasting: 3\nI0404 17:33:00.208039 906 log.go:172] (0xc00099a000) Reply frame received for 3\nI0404 17:33:00.208082 906 log.go:172] (0xc00099a000) (0xc0003a2b40) Create stream\nI0404 17:33:00.208094 906 log.go:172] (0xc00099a000) (0xc0003a2b40) Stream added, broadcasting: 5\nI0404 17:33:00.208912 906 log.go:172] (0xc00099a000) Reply frame received for 5\nI0404 17:33:00.257333 906 log.go:172] (0xc00099a000) Data frame received for 5\nI0404 17:33:00.257361 906 log.go:172] (0xc0003a2b40) (5) Data frame handling\nI0404 17:33:00.257381 906 log.go:172] (0xc0003a2b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 17:33:00.304845 906 log.go:172] (0xc00099a000) Data frame received for 5\nI0404 17:33:00.304889 906 log.go:172] (0xc0003a2b40) (5) Data frame handling\nI0404 17:33:00.304914 906 log.go:172] (0xc00099a000) Data frame received for 3\nI0404 17:33:00.304928 906 log.go:172] (0xc00052d4a0) (3) Data frame handling\nI0404 17:33:00.304944 906 log.go:172] (0xc00052d4a0) (3) Data frame sent\nI0404 17:33:00.304964 906 log.go:172] (0xc00099a000) Data frame received for 3\nI0404 17:33:00.304982 906 log.go:172] (0xc00052d4a0) (3) Data frame handling\nI0404 17:33:00.306926 906 log.go:172] (0xc00099a000) Data frame received for 1\nI0404 17:33:00.306958 906 log.go:172] (0xc000a7e0a0) (1) Data frame handling\nI0404 17:33:00.306974 906 log.go:172] (0xc000a7e0a0) (1) Data frame sent\nI0404 17:33:00.306990 906 log.go:172] (0xc00099a000) (0xc000a7e0a0) Stream removed, broadcasting: 1\nI0404 17:33:00.307064 906 log.go:172] 
(0xc00099a000) Go away received\nI0404 17:33:00.307379 906 log.go:172] (0xc00099a000) (0xc000a7e0a0) Stream removed, broadcasting: 1\nI0404 17:33:00.307398 906 log.go:172] (0xc00099a000) (0xc00052d4a0) Stream removed, broadcasting: 3\nI0404 17:33:00.307409 906 log.go:172] (0xc00099a000) (0xc0003a2b40) Stream removed, broadcasting: 5\n" Apr 4 17:33:00.312: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 17:33:00.312: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 17:33:00.312: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 17:33:00.315: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 4 17:33:10.336: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 17:33:10.336: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 4 17:33:10.336: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 4 17:33:10.356: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 17:33:10.356: INFO: ss-0 latest-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC }] Apr 4 17:33:10.356: INFO: ss-1 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC }] Apr 4 17:33:10.356: INFO: ss-2 latest-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC }] Apr 4 17:33:10.356: INFO: Apr 4 17:33:10.356: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:11.361: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 17:33:11.361: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC }] Apr 4 17:33:11.361: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC }] Apr 4 17:33:11.361: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC }] Apr 4 17:33:11.361: INFO: Apr 4 17:33:11.361: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:12.476: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 17:33:12.477: INFO: ss-0 latest-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC }] Apr 4 17:33:12.477: INFO: ss-1 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC }] Apr 4 17:33:12.477: INFO: ss-2 latest-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 
17:32:38 +0000 UTC }] Apr 4 17:33:12.477: INFO: Apr 4 17:33:12.477: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:13.481: INFO: POD NODE PHASE GRACE CONDITIONS Apr 4 17:33:13.481: INFO: ss-0 latest-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:18 +0000 UTC }] Apr 4 17:33:13.481: INFO: ss-1 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC }] Apr 4 17:33:13.481: INFO: ss-2 latest-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:33:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-04 17:32:38 +0000 UTC }] Apr 4 17:33:13.481: INFO: Apr 4 17:33:13.481: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:14.486: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:15.492: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:16.497: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:17.502: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:18.507: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 4 17:33:19.511: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7319 Apr 4 17:33:20.516: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 17:33:20.642: INFO: rc: 1 Apr 4 17:33:20.642: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Apr 4 17:33:30.643: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 17:33:30.742: INFO: rc: 1 Apr 4 17:33:30.742: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 4 17:38:23.581: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7319 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 17:38:23.667: INFO: rc: 1 Apr 4 17:38:23.667: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Apr 4
17:38:23.667: INFO: Scaling statefulset ss to 0 Apr 4 17:38:23.683: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 17:38:23.686: INFO: Deleting all statefulset in ns statefulset-7319 Apr 4 17:38:23.688: INFO: Scaling statefulset ss to 0 Apr 4 17:38:23.695: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 17:38:23.697: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:38:23.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7319" for this suite. • [SLOW TEST:365.911 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":281,"completed":47,"skipped":743,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:38:23.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 17:38:23.931: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 4 17:38:26.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1193 create -f -' Apr 4 17:38:31.891: INFO: stderr: "" Apr 4 17:38:31.891: INFO: stdout: "e2e-test-crd-publish-openapi-1175-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 4 17:38:31.891: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1193 delete e2e-test-crd-publish-openapi-1175-crds test-cr' Apr 4 17:38:32.004: INFO: stderr: "" Apr 4 17:38:32.004: INFO: stdout: "e2e-test-crd-publish-openapi-1175-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 4 17:38:32.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1193 apply -f -' Apr 4 17:38:32.314: INFO: stderr: "" Apr 4 17:38:32.314: INFO: stdout: "e2e-test-crd-publish-openapi-1175-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 4 17:38:32.314: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1193 delete 
e2e-test-crd-publish-openapi-1175-crds test-cr' Apr 4 17:38:32.415: INFO: stderr: "" Apr 4 17:38:32.415: INFO: stdout: "e2e-test-crd-publish-openapi-1175-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 4 17:38:32.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1175-crds' Apr 4 17:38:32.657: INFO: stderr: "" Apr 4 17:38:32.657: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1175-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:38:35.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1193" for this suite. 
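(Editor's note, not part of the log: the test above validates that the apiserver publishes OpenAPI for a CRD whose embedded object preserves unknown fields. A minimal sketch of such a CRD, reconstructed from the group, kind, and field descriptions visible in the `kubectl explain` output — the suite's actual registered schema may differ:)

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crd-publish-openapi-1175-crds.crd-publish-openapi-test-unknown-in-nested.example.com
spec:
  group: crd-publish-openapi-test-unknown-in-nested.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crd-publish-openapi-1175-crds
    kind: E2e-test-crd-publish-openapi-1175-crd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            # let clients set arbitrary unknown properties inside this embedded object
            x-kubernetes-preserve-unknown-fields: true
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true
```

With `x-kubernetes-preserve-unknown-fields: true`, both server-side pruning and client-side validation accept the unknown properties the test submits via `kubectl create`/`apply`.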
• [SLOW TEST:11.764 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":281,"completed":48,"skipped":777,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:38:35.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-455be907-888c-4896-a2a3-5806bb505d9b STEP: Creating secret with name secret-projected-all-test-volume-f18b1cee-2f28-4137-89d7-b48eaac2794b STEP: Creating a pod to test Check all projections for projected volume plugin Apr 4 17:38:35.694: INFO: Waiting up to 5m0s for pod "projected-volume-947f3c44-b15a-4b63-9980-fd8a31adf613" in namespace "projected-7961" to be "Succeeded or Failed" Apr 4 17:38:35.706: INFO: Pod 
"projected-volume-947f3c44-b15a-4b63-9980-fd8a31adf613": Phase="Pending", Reason="", readiness=false. Elapsed: 11.896551ms Apr 4 17:38:37.709: INFO: Pod "projected-volume-947f3c44-b15a-4b63-9980-fd8a31adf613": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014998764s Apr 4 17:38:39.713: INFO: Pod "projected-volume-947f3c44-b15a-4b63-9980-fd8a31adf613": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01913942s STEP: Saw pod success Apr 4 17:38:39.714: INFO: Pod "projected-volume-947f3c44-b15a-4b63-9980-fd8a31adf613" satisfied condition "Succeeded or Failed" Apr 4 17:38:39.715: INFO: Trying to get logs from node latest-worker pod projected-volume-947f3c44-b15a-4b63-9980-fd8a31adf613 container projected-all-volume-test: STEP: delete the pod Apr 4 17:38:39.754: INFO: Waiting for pod projected-volume-947f3c44-b15a-4b63-9980-fd8a31adf613 to disappear Apr 4 17:38:39.764: INFO: Pod projected-volume-947f3c44-b15a-4b63-9980-fd8a31adf613 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:38:39.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7961" for this suite. 
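(Editor's note, not part of the log: the "Projected combined" test mounts all three projection sources — configMap, secret, and downwardAPI — into one volume. An illustrative manifest under that assumption; the item keys, paths, and image here are hypothetical, only the configMap and secret names come from the log:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: docker.io/library/busybox:1.29   # hypothetical image
    command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:                       # multiple sources merged into one mount
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: configmap-projected-all-test-volume-455be907-888c-4896-a2a3-5806bb505d9b
          items:
          - key: configmap-data      # hypothetical key
            path: cm
      - secret:
          name: secret-projected-all-test-volume-f18b1cee-2f28-4137-89d7-b48eaac2794b
          items:
          - key: secret-data         # hypothetical key
            path: secret
```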
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":281,"completed":49,"skipped":783,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:38:39.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:38:39.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3492" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":281,"completed":50,"skipped":784,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:38:39.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:38:53.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4961" for this suite. 
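(Editor's note, not part of the log: "locally restarted" in the Job test above refers to `restartPolicy: OnFailure`, where the kubelet restarts a failing container in place rather than the Job controller creating a replacement pod. A sketch of such a Job; the name, counts, image, and failure command are illustrative, not the suite's actual spec:)

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: local-restart-example   # illustrative name
spec:
  completions: 4                # illustrative counts
  parallelism: 2
  template:
    spec:
      restartPolicy: OnFailure  # kubelet retries the container on the same pod/node
      containers:
      - name: c
        image: docker.io/library/busybox:1.29   # hypothetical image
        # sometimes exits non-zero; OnFailure retries it until it succeeds
        command: ["/bin/sh", "-c", "exit $((RANDOM % 2))"]
```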
• [SLOW TEST:14.084 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":281,"completed":51,"skipped":803,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:38:53.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Apr 4 17:38:54.019: INFO: Waiting up to 5m0s for pod "var-expansion-acfda46f-b9c6-4e1b-b212-5fa9d2a5e415" in namespace "var-expansion-4119" to be "Succeeded or Failed" Apr 4 17:38:54.029: INFO: Pod "var-expansion-acfda46f-b9c6-4e1b-b212-5fa9d2a5e415": Phase="Pending", Reason="", readiness=false. Elapsed: 9.211305ms Apr 4 17:38:56.032: INFO: Pod "var-expansion-acfda46f-b9c6-4e1b-b212-5fa9d2a5e415": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013132589s Apr 4 17:38:58.036: INFO: Pod "var-expansion-acfda46f-b9c6-4e1b-b212-5fa9d2a5e415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017078962s STEP: Saw pod success Apr 4 17:38:58.036: INFO: Pod "var-expansion-acfda46f-b9c6-4e1b-b212-5fa9d2a5e415" satisfied condition "Succeeded or Failed" Apr 4 17:38:58.040: INFO: Trying to get logs from node latest-worker2 pod var-expansion-acfda46f-b9c6-4e1b-b212-5fa9d2a5e415 container dapi-container: STEP: delete the pod Apr 4 17:38:58.112: INFO: Waiting for pod var-expansion-acfda46f-b9c6-4e1b-b212-5fa9d2a5e415 to disappear Apr 4 17:38:58.124: INFO: Pod var-expansion-acfda46f-b9c6-4e1b-b212-5fa9d2a5e415 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:38:58.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4119" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":281,"completed":52,"skipped":804,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:38:58.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-dbf9046f-ad61-4b94-a682-5c3dd3c9319f STEP: Creating a pod to test consume secrets Apr 4 17:38:58.252: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d3ef4aa2-62c2-4862-bf86-ed8170cd9f6b" in namespace "projected-6292" to be "Succeeded or Failed" Apr 4 17:38:58.278: INFO: Pod "pod-projected-secrets-d3ef4aa2-62c2-4862-bf86-ed8170cd9f6b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.22836ms Apr 4 17:39:00.282: INFO: Pod "pod-projected-secrets-d3ef4aa2-62c2-4862-bf86-ed8170cd9f6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030057163s Apr 4 17:39:02.286: INFO: Pod "pod-projected-secrets-d3ef4aa2-62c2-4862-bf86-ed8170cd9f6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03453918s STEP: Saw pod success Apr 4 17:39:02.287: INFO: Pod "pod-projected-secrets-d3ef4aa2-62c2-4862-bf86-ed8170cd9f6b" satisfied condition "Succeeded or Failed" Apr 4 17:39:02.290: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-d3ef4aa2-62c2-4862-bf86-ed8170cd9f6b container projected-secret-volume-test: STEP: delete the pod Apr 4 17:39:02.308: INFO: Waiting for pod pod-projected-secrets-d3ef4aa2-62c2-4862-bf86-ed8170cd9f6b to disappear Apr 4 17:39:02.313: INFO: Pod pod-projected-secrets-d3ef4aa2-62c2-4862-bf86-ed8170cd9f6b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:39:02.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6292" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":53,"skipped":818,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:39:02.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:39:02.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1387" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":281,"completed":54,"skipped":821,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:39:02.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-aef40b78-2704-4cef-900e-719021f8cc9f STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-aef40b78-2704-4cef-900e-719021f8cc9f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:40:14.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3991" for this suite. 
• [SLOW TEST:72.399 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":281,"completed":55,"skipped":830,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:40:14.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 4 17:40:15.057: INFO: Waiting up to 5m0s for pod "pod-2cbeeb92-7ec5-4f0d-9762-1d3c3593bfd7" in namespace "emptydir-1850" to be "Succeeded or Failed" Apr 4 17:40:15.060: INFO: Pod "pod-2cbeeb92-7ec5-4f0d-9762-1d3c3593bfd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.873546ms Apr 4 17:40:17.064: INFO: Pod "pod-2cbeeb92-7ec5-4f0d-9762-1d3c3593bfd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006492646s Apr 4 17:40:19.067: INFO: Pod "pod-2cbeeb92-7ec5-4f0d-9762-1d3c3593bfd7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010029616s STEP: Saw pod success Apr 4 17:40:19.067: INFO: Pod "pod-2cbeeb92-7ec5-4f0d-9762-1d3c3593bfd7" satisfied condition "Succeeded or Failed" Apr 4 17:40:19.070: INFO: Trying to get logs from node latest-worker pod pod-2cbeeb92-7ec5-4f0d-9762-1d3c3593bfd7 container test-container: STEP: delete the pod Apr 4 17:40:19.086: INFO: Waiting for pod pod-2cbeeb92-7ec5-4f0d-9762-1d3c3593bfd7 to disappear Apr 4 17:40:19.097: INFO: Pod pod-2cbeeb92-7ec5-4f0d-9762-1d3c3593bfd7 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:40:19.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1850" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":56,"skipped":830,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:40:19.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
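(Editor's note, not part of the log: the pod spec dumped in the next log statement corresponds to a manifest along these lines — reconstructed from the logged fields `DNSPolicy:None`, `Nameservers:[1.1.1.1]`, and `Searches:[resolv.conf.local]`, not the suite's literal manifest:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-882
  namespace: dns-882
spec:
  containers:
  - name: agnhost
    image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
    args: ["pause"]
  dnsPolicy: "None"        # ignore cluster DNS settings entirely
  dnsConfig:               # everything in resolv.conf comes from here
    nameservers:
    - 1.1.1.1              # custom resolver the test verifies via `agnhost dns-server-list`
    searches:
    - resolv.conf.local    # custom suffix the test verifies via `agnhost dns-suffix`
```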
Apr 4 17:40:19.191: INFO: Created pod &Pod{ObjectMeta:{dns-882 dns-882 /api/v1/namespaces/dns-882/pods/dns-882 79f402ca-923c-479c-b76c-9825cfe1222b 5392976 0 2020-04-04 17:40:19 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lfrpm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lfrpm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lfrpm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 4 17:40:19.198: INFO: The status of Pod dns-882 is Pending, waiting for it to be Running (with Ready = true) Apr 4 17:40:21.202: INFO: The status of Pod dns-882 is Pending, waiting for it to be Running (with Ready = true) Apr 4 17:40:23.203: INFO: The status of Pod dns-882 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod...
Apr 4 17:40:23.203: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-882 PodName:dns-882 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 17:40:23.203: INFO: >>> kubeConfig: /root/.kube/config I0404 17:40:23.254053 7 log.go:172] (0xc0022ee0b0) (0xc001e581e0) Create stream I0404 17:40:23.254098 7 log.go:172] (0xc0022ee0b0) (0xc001e581e0) Stream added, broadcasting: 1 I0404 17:40:23.255915 7 log.go:172] (0xc0022ee0b0) Reply frame received for 1 I0404 17:40:23.255946 7 log.go:172] (0xc0022ee0b0) (0xc001b6e0a0) Create stream I0404 17:40:23.255956 7 log.go:172] (0xc0022ee0b0) (0xc001b6e0a0) Stream added, broadcasting: 3 I0404 17:40:23.256635 7 log.go:172] (0xc0022ee0b0) Reply frame received for 3 I0404 17:40:23.256672 7 log.go:172] (0xc0022ee0b0) (0xc001e58280) Create stream I0404 17:40:23.256692 7 log.go:172] (0xc0022ee0b0) (0xc001e58280) Stream added, broadcasting: 5 I0404 17:40:23.257721 7 log.go:172] (0xc0022ee0b0) Reply frame received for 5 I0404 17:40:23.343110 7 log.go:172] (0xc0022ee0b0) Data frame received for 3 I0404 17:40:23.343141 7 log.go:172] (0xc001b6e0a0) (3) Data frame handling I0404 17:40:23.343160 7 log.go:172] (0xc001b6e0a0) (3) Data frame sent I0404 17:40:23.343611 7 log.go:172] (0xc0022ee0b0) Data frame received for 5 I0404 17:40:23.343653 7 log.go:172] (0xc001e58280) (5) Data frame handling I0404 17:40:23.343690 7 log.go:172] (0xc0022ee0b0) Data frame received for 3 I0404 17:40:23.343714 7 log.go:172] (0xc001b6e0a0) (3) Data frame handling I0404 17:40:23.345775 7 log.go:172] (0xc0022ee0b0) Data frame received for 1 I0404 17:40:23.345797 7 log.go:172] (0xc001e581e0) (1) Data frame handling I0404 17:40:23.345813 7 log.go:172] (0xc001e581e0) (1) Data frame sent I0404 17:40:23.345837 7 log.go:172] (0xc0022ee0b0) (0xc001e581e0) Stream removed, broadcasting: 1 I0404 17:40:23.345981 7 log.go:172] (0xc0022ee0b0) Go away received I0404 17:40:23.346254 7 log.go:172] (0xc0022ee0b0) 
(0xc001e581e0) Stream removed, broadcasting: 1 I0404 17:40:23.346283 7 log.go:172] (0xc0022ee0b0) (0xc001b6e0a0) Stream removed, broadcasting: 3 I0404 17:40:23.346317 7 log.go:172] (0xc0022ee0b0) (0xc001e58280) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 4 17:40:23.346: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-882 PodName:dns-882 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 17:40:23.346: INFO: >>> kubeConfig: /root/.kube/config I0404 17:40:23.379885 7 log.go:172] (0xc0022ee840) (0xc001e58460) Create stream I0404 17:40:23.379922 7 log.go:172] (0xc0022ee840) (0xc001e58460) Stream added, broadcasting: 1 I0404 17:40:23.382044 7 log.go:172] (0xc0022ee840) Reply frame received for 1 I0404 17:40:23.382111 7 log.go:172] (0xc0022ee840) (0xc002bfd360) Create stream I0404 17:40:23.382135 7 log.go:172] (0xc0022ee840) (0xc002bfd360) Stream added, broadcasting: 3 I0404 17:40:23.383359 7 log.go:172] (0xc0022ee840) Reply frame received for 3 I0404 17:40:23.383402 7 log.go:172] (0xc0022ee840) (0xc001b6e140) Create stream I0404 17:40:23.383419 7 log.go:172] (0xc0022ee840) (0xc001b6e140) Stream added, broadcasting: 5 I0404 17:40:23.384537 7 log.go:172] (0xc0022ee840) Reply frame received for 5 I0404 17:40:23.454016 7 log.go:172] (0xc0022ee840) Data frame received for 3 I0404 17:40:23.454069 7 log.go:172] (0xc002bfd360) (3) Data frame handling I0404 17:40:23.454098 7 log.go:172] (0xc002bfd360) (3) Data frame sent I0404 17:40:23.455241 7 log.go:172] (0xc0022ee840) Data frame received for 5 I0404 17:40:23.455291 7 log.go:172] (0xc001b6e140) (5) Data frame handling I0404 17:40:23.455330 7 log.go:172] (0xc0022ee840) Data frame received for 3 I0404 17:40:23.455349 7 log.go:172] (0xc002bfd360) (3) Data frame handling I0404 17:40:23.456998 7 log.go:172] (0xc0022ee840) Data frame received for 1 I0404 17:40:23.457087 7 log.go:172] (0xc001e58460) (1) Data 
frame handling I0404 17:40:23.457282 7 log.go:172] (0xc001e58460) (1) Data frame sent I0404 17:40:23.457305 7 log.go:172] (0xc0022ee840) (0xc001e58460) Stream removed, broadcasting: 1 I0404 17:40:23.457323 7 log.go:172] (0xc0022ee840) Go away received I0404 17:40:23.457418 7 log.go:172] (0xc0022ee840) (0xc001e58460) Stream removed, broadcasting: 1 I0404 17:40:23.457429 7 log.go:172] (0xc0022ee840) (0xc002bfd360) Stream removed, broadcasting: 3 I0404 17:40:23.457452 7 log.go:172] (0xc0022ee840) (0xc001b6e140) Stream removed, broadcasting: 5 Apr 4 17:40:23.457: INFO: Deleting pod dns-882... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:40:23.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-882" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":281,"completed":57,"skipped":854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:40:23.498: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Apr 4 17:40:23.781: INFO: Waiting up 
to 5m0s for pod "client-containers-739c95bf-e1b7-4111-956f-dcc0b0e7aca4" in namespace "containers-3206" to be "Succeeded or Failed" Apr 4 17:40:23.822: INFO: Pod "client-containers-739c95bf-e1b7-4111-956f-dcc0b0e7aca4": Phase="Pending", Reason="", readiness=false. Elapsed: 40.497732ms Apr 4 17:40:25.826: INFO: Pod "client-containers-739c95bf-e1b7-4111-956f-dcc0b0e7aca4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044490881s Apr 4 17:40:27.830: INFO: Pod "client-containers-739c95bf-e1b7-4111-956f-dcc0b0e7aca4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048860709s STEP: Saw pod success Apr 4 17:40:27.830: INFO: Pod "client-containers-739c95bf-e1b7-4111-956f-dcc0b0e7aca4" satisfied condition "Succeeded or Failed" Apr 4 17:40:27.833: INFO: Trying to get logs from node latest-worker pod client-containers-739c95bf-e1b7-4111-956f-dcc0b0e7aca4 container test-container: STEP: delete the pod Apr 4 17:40:27.854: INFO: Waiting for pod client-containers-739c95bf-e1b7-4111-956f-dcc0b0e7aca4 to disappear Apr 4 17:40:27.872: INFO: Pod client-containers-739c95bf-e1b7-4111-956f-dcc0b0e7aca4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:40:27.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3206" for this suite. 
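(Editor's note, not part of the log: the Docker Containers test above overrides the image's default ENTRYPOINT by setting `command` in the container spec. A minimal sketch under that assumption; the name, image, and echoed string are illustrative:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # hypothetical image
    # `command` replaces the image's ENTRYPOINT; `args` (unset here) would replace CMD
    command: ["/bin/echo", "hello from the overridden entrypoint"]
```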
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":281,"completed":58,"skipped":921,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:40:27.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 17:40:27.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3394ea0d-8e47-431c-92d6-e9beebd70ce3" in namespace "projected-453" to be "Succeeded or Failed" Apr 4 17:40:27.949: INFO: Pod "downwardapi-volume-3394ea0d-8e47-431c-92d6-e9beebd70ce3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51374ms Apr 4 17:40:29.952: INFO: Pod "downwardapi-volume-3394ea0d-8e47-431c-92d6-e9beebd70ce3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011536761s Apr 4 17:40:31.957: INFO: Pod "downwardapi-volume-3394ea0d-8e47-431c-92d6-e9beebd70ce3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016179071s STEP: Saw pod success Apr 4 17:40:31.957: INFO: Pod "downwardapi-volume-3394ea0d-8e47-431c-92d6-e9beebd70ce3" satisfied condition "Succeeded or Failed" Apr 4 17:40:31.960: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-3394ea0d-8e47-431c-92d6-e9beebd70ce3 container client-container: STEP: delete the pod Apr 4 17:40:31.994: INFO: Waiting for pod downwardapi-volume-3394ea0d-8e47-431c-92d6-e9beebd70ce3 to disappear Apr 4 17:40:32.015: INFO: Pod downwardapi-volume-3394ea0d-8e47-431c-92d6-e9beebd70ce3 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:40:32.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-453" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":281,"completed":59,"skipped":941,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:40:32.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod 
pod-subpath-test-configmap-5d64 STEP: Creating a pod to test atomic-volume-subpath Apr 4 17:40:32.173: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5d64" in namespace "subpath-5980" to be "Succeeded or Failed" Apr 4 17:40:32.209: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Pending", Reason="", readiness=false. Elapsed: 36.381115ms Apr 4 17:40:34.214: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040816776s Apr 4 17:40:36.218: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 4.045298873s Apr 4 17:40:38.223: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 6.04996968s Apr 4 17:40:40.227: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 8.054177893s Apr 4 17:40:42.231: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 10.058240281s Apr 4 17:40:44.236: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 12.062954516s Apr 4 17:40:46.240: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 14.067382413s Apr 4 17:40:48.245: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 16.072092656s Apr 4 17:40:50.250: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 18.0766246s Apr 4 17:40:52.254: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 20.080912477s Apr 4 17:40:54.257: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Running", Reason="", readiness=true. Elapsed: 22.084538862s Apr 4 17:40:56.261: INFO: Pod "pod-subpath-test-configmap-5d64": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.088140524s STEP: Saw pod success Apr 4 17:40:56.261: INFO: Pod "pod-subpath-test-configmap-5d64" satisfied condition "Succeeded or Failed" Apr 4 17:40:56.264: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-configmap-5d64 container test-container-subpath-configmap-5d64: STEP: delete the pod Apr 4 17:40:56.284: INFO: Waiting for pod pod-subpath-test-configmap-5d64 to disappear Apr 4 17:40:56.289: INFO: Pod pod-subpath-test-configmap-5d64 no longer exists STEP: Deleting pod pod-subpath-test-configmap-5d64 Apr 4 17:40:56.289: INFO: Deleting pod "pod-subpath-test-configmap-5d64" in namespace "subpath-5980" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:40:56.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5980" for this suite. • [SLOW TEST:24.259 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":281,"completed":60,"skipped":964,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:40:56.300: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5171 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5171 I0404 17:40:56.427888 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5171, replica count: 2 I0404 17:40:59.478428 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 17:41:02.478682 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 4 17:41:02.478: INFO: Creating new exec pod Apr 4 17:41:07.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5171 execpodd22vt -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 4 17:41:07.694: INFO: stderr: "I0404 17:41:07.625682 1667 log.go:172] (0xc0000e6b00) (0xc000665180) Create stream\nI0404 17:41:07.625757 1667 log.go:172] (0xc0000e6b00) (0xc000665180) Stream added, broadcasting: 1\nI0404 17:41:07.628560 1667 log.go:172] (0xc0000e6b00) Reply frame received for 1\nI0404 17:41:07.628617 1667 log.go:172] (0xc0000e6b00) (0xc00094e000) Create stream\nI0404 17:41:07.628644 1667 log.go:172] (0xc0000e6b00) (0xc00094e000) Stream added, broadcasting: 3\nI0404 
17:41:07.629899 1667 log.go:172] (0xc0000e6b00) Reply frame received for 3\nI0404 17:41:07.629938 1667 log.go:172] (0xc0000e6b00) (0xc00094e0a0) Create stream\nI0404 17:41:07.629951 1667 log.go:172] (0xc0000e6b00) (0xc00094e0a0) Stream added, broadcasting: 5\nI0404 17:41:07.630868 1667 log.go:172] (0xc0000e6b00) Reply frame received for 5\nI0404 17:41:07.687560 1667 log.go:172] (0xc0000e6b00) Data frame received for 5\nI0404 17:41:07.687592 1667 log.go:172] (0xc00094e0a0) (5) Data frame handling\nI0404 17:41:07.687612 1667 log.go:172] (0xc00094e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0404 17:41:07.688027 1667 log.go:172] (0xc0000e6b00) Data frame received for 5\nI0404 17:41:07.688064 1667 log.go:172] (0xc00094e0a0) (5) Data frame handling\nI0404 17:41:07.688093 1667 log.go:172] (0xc00094e0a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0404 17:41:07.688255 1667 log.go:172] (0xc0000e6b00) Data frame received for 5\nI0404 17:41:07.688284 1667 log.go:172] (0xc00094e0a0) (5) Data frame handling\nI0404 17:41:07.688338 1667 log.go:172] (0xc0000e6b00) Data frame received for 3\nI0404 17:41:07.688353 1667 log.go:172] (0xc00094e000) (3) Data frame handling\nI0404 17:41:07.690153 1667 log.go:172] (0xc0000e6b00) Data frame received for 1\nI0404 17:41:07.690198 1667 log.go:172] (0xc000665180) (1) Data frame handling\nI0404 17:41:07.690219 1667 log.go:172] (0xc000665180) (1) Data frame sent\nI0404 17:41:07.690245 1667 log.go:172] (0xc0000e6b00) (0xc000665180) Stream removed, broadcasting: 1\nI0404 17:41:07.690272 1667 log.go:172] (0xc0000e6b00) Go away received\nI0404 17:41:07.690616 1667 log.go:172] (0xc0000e6b00) (0xc000665180) Stream removed, broadcasting: 1\nI0404 17:41:07.690632 1667 log.go:172] (0xc0000e6b00) (0xc00094e000) Stream removed, broadcasting: 3\nI0404 17:41:07.690640 1667 log.go:172] (0xc0000e6b00) (0xc00094e0a0) Stream removed, broadcasting: 5\n" Apr 4 17:41:07.695: INFO: stdout: "" Apr 4 
17:41:07.696: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-5171 execpodd22vt -- /bin/sh -x -c nc -zv -t -w 2 10.96.55.148 80' Apr 4 17:41:07.895: INFO: stderr: "I0404 17:41:07.828563 1690 log.go:172] (0xc000a3b3f0) (0xc000aa6780) Create stream\nI0404 17:41:07.828633 1690 log.go:172] (0xc000a3b3f0) (0xc000aa6780) Stream added, broadcasting: 1\nI0404 17:41:07.833069 1690 log.go:172] (0xc000a3b3f0) Reply frame received for 1\nI0404 17:41:07.833286 1690 log.go:172] (0xc000a3b3f0) (0xc00055d360) Create stream\nI0404 17:41:07.833310 1690 log.go:172] (0xc000a3b3f0) (0xc00055d360) Stream added, broadcasting: 3\nI0404 17:41:07.834437 1690 log.go:172] (0xc000a3b3f0) Reply frame received for 3\nI0404 17:41:07.834476 1690 log.go:172] (0xc000a3b3f0) (0xc0003fe960) Create stream\nI0404 17:41:07.834489 1690 log.go:172] (0xc000a3b3f0) (0xc0003fe960) Stream added, broadcasting: 5\nI0404 17:41:07.835418 1690 log.go:172] (0xc000a3b3f0) Reply frame received for 5\nI0404 17:41:07.888839 1690 log.go:172] (0xc000a3b3f0) Data frame received for 3\nI0404 17:41:07.888861 1690 log.go:172] (0xc00055d360) (3) Data frame handling\nI0404 17:41:07.888898 1690 log.go:172] (0xc000a3b3f0) Data frame received for 5\nI0404 17:41:07.888920 1690 log.go:172] (0xc0003fe960) (5) Data frame handling\nI0404 17:41:07.888941 1690 log.go:172] (0xc0003fe960) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.55.148 80\nConnection to 10.96.55.148 80 port [tcp/http] succeeded!\nI0404 17:41:07.888954 1690 log.go:172] (0xc000a3b3f0) Data frame received for 5\nI0404 17:41:07.888977 1690 log.go:172] (0xc0003fe960) (5) Data frame handling\nI0404 17:41:07.890472 1690 log.go:172] (0xc000a3b3f0) Data frame received for 1\nI0404 17:41:07.890495 1690 log.go:172] (0xc000aa6780) (1) Data frame handling\nI0404 17:41:07.890509 1690 log.go:172] (0xc000aa6780) (1) Data frame sent\nI0404 17:41:07.890518 1690 log.go:172] (0xc000a3b3f0) (0xc000aa6780) 
Stream removed, broadcasting: 1\nI0404 17:41:07.890771 1690 log.go:172] (0xc000a3b3f0) (0xc000aa6780) Stream removed, broadcasting: 1\nI0404 17:41:07.890802 1690 log.go:172] (0xc000a3b3f0) (0xc00055d360) Stream removed, broadcasting: 3\nI0404 17:41:07.890813 1690 log.go:172] (0xc000a3b3f0) (0xc0003fe960) Stream removed, broadcasting: 5\n" Apr 4 17:41:07.895: INFO: stdout: "" Apr 4 17:41:07.895: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:41:07.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5171" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:11.643 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":281,"completed":61,"skipped":972,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:41:07.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account 
to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-a24bc492-af2c-4bae-8f8f-bf66de24cbcf STEP: Creating a pod to test consume configMaps Apr 4 17:41:08.013: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e350fd44-f144-435b-8f38-c75d56d3da4c" in namespace "projected-776" to be "Succeeded or Failed" Apr 4 17:41:08.030: INFO: Pod "pod-projected-configmaps-e350fd44-f144-435b-8f38-c75d56d3da4c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.217252ms Apr 4 17:41:10.082: INFO: Pod "pod-projected-configmaps-e350fd44-f144-435b-8f38-c75d56d3da4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069322359s Apr 4 17:41:12.160: INFO: Pod "pod-projected-configmaps-e350fd44-f144-435b-8f38-c75d56d3da4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.146758687s STEP: Saw pod success Apr 4 17:41:12.160: INFO: Pod "pod-projected-configmaps-e350fd44-f144-435b-8f38-c75d56d3da4c" satisfied condition "Succeeded or Failed" Apr 4 17:41:12.162: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e350fd44-f144-435b-8f38-c75d56d3da4c container projected-configmap-volume-test: STEP: delete the pod Apr 4 17:41:12.205: INFO: Waiting for pod pod-projected-configmaps-e350fd44-f144-435b-8f38-c75d56d3da4c to disappear Apr 4 17:41:12.208: INFO: Pod pod-projected-configmaps-e350fd44-f144-435b-8f38-c75d56d3da4c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:41:12.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-776" for this suite. 
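
Editor's note: the projected-configMap test that finishes above mounts a ConfigMap through a `projected` volume with a key-to-path mapping and reads it as a non-root user. A minimal sketch of such a pod spec, assuming illustrative key/path names and UID (only the ConfigMap name is taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                        # non-root UID (illustrative)
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29  # assumed image
    command: ["cat", "/etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-a24bc492-af2c-4bae-8f8f-bf66de24cbcf
          items:
          - key: data-1          # illustrative key-to-path mapping
            path: path/to/data-2
```
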
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":281,"completed":62,"skipped":981,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:41:12.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 4 17:41:12.347: INFO: Waiting up to 5m0s for pod "pod-9fa08046-8782-4953-a913-d3301950c912" in namespace "emptydir-5042" to be "Succeeded or Failed" Apr 4 17:41:12.383: INFO: Pod "pod-9fa08046-8782-4953-a913-d3301950c912": Phase="Pending", Reason="", readiness=false. Elapsed: 35.198807ms Apr 4 17:41:14.496: INFO: Pod "pod-9fa08046-8782-4953-a913-d3301950c912": Phase="Pending", Reason="", readiness=false. Elapsed: 2.148239968s Apr 4 17:41:16.500: INFO: Pod "pod-9fa08046-8782-4953-a913-d3301950c912": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152734801s Apr 4 17:41:18.505: INFO: Pod "pod-9fa08046-8782-4953-a913-d3301950c912": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.157208668s STEP: Saw pod success Apr 4 17:41:18.505: INFO: Pod "pod-9fa08046-8782-4953-a913-d3301950c912" satisfied condition "Succeeded or Failed" Apr 4 17:41:18.508: INFO: Trying to get logs from node latest-worker pod pod-9fa08046-8782-4953-a913-d3301950c912 container test-container: STEP: delete the pod Apr 4 17:41:18.524: INFO: Waiting for pod pod-9fa08046-8782-4953-a913-d3301950c912 to disappear Apr 4 17:41:18.529: INFO: Pod pod-9fa08046-8782-4953-a913-d3301950c912 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:41:18.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5042" for this suite. • [SLOW TEST:6.320 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":63,"skipped":1054,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:41:18.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in 
namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 17:43:18.640: INFO: Deleting pod "var-expansion-67214cf1-e957-4f9d-b4ae-b1f69bbf96e9" in namespace "var-expansion-5913" Apr 4 17:43:18.644: INFO: Wait up to 5m0s for pod "var-expansion-67214cf1-e957-4f9d-b4ae-b1f69bbf96e9" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:43:24.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5913" for this suite. • [SLOW TEST:126.128 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":281,"completed":64,"skipped":1083,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:43:24.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 17:43:25.225: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 17:43:27.307: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619005, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619005, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619005, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619005, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 17:43:29.310: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619005, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619005, 
loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619005, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619005, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 17:43:32.376: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:43:32.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4207" for this suite. STEP: Destroying namespace "webhook-4207-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.139 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":281,"completed":65,"skipped":1143,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:43:32.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-f12fcf67-2651-4800-b47a-d1b9c54573b5 STEP: Creating secret with name s-test-opt-upd-4f454c79-e66d-4691-9d21-16d891aca929 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-f12fcf67-2651-4800-b47a-d1b9c54573b5 STEP: Updating secret s-test-opt-upd-4f454c79-e66d-4691-9d21-16d891aca929 STEP: Creating secret with name 
s-test-opt-create-a10ee8aa-6dd1-491e-81ee-86162572ded4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:43:43.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4299" for this suite. • [SLOW TEST:10.224 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":281,"completed":66,"skipped":1158,"failed":0} S ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:43:43.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-5293b26a-a43d-4a25-b8fc-c6152344f895 in namespace container-probe-6832 Apr 4 
17:43:47.162: INFO: Started pod test-webserver-5293b26a-a43d-4a25-b8fc-c6152344f895 in namespace container-probe-6832 STEP: checking the pod's current state and verifying that restartCount is present Apr 4 17:43:47.165: INFO: Initial restart count of pod test-webserver-5293b26a-a43d-4a25-b8fc-c6152344f895 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:47:48.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6832" for this suite. • [SLOW TEST:245.473 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":281,"completed":67,"skipped":1159,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:47:48.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [BeforeEach] Kubectl replace 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1484 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 4 17:47:48.855: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-8255' Apr 4 17:47:49.204: INFO: stderr: "" Apr 4 17:47:49.204: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 4 17:47:54.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-8255 -o json' Apr 4 17:47:54.348: INFO: stderr: "" Apr 4 17:47:54.348: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-04T17:47:49Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8255\",\n \"resourceVersion\": \"5394707\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-8255/pods/e2e-test-httpd-pod\",\n \"uid\": \"5b1090f3-b295-43dd-9dc6-1173102f69d6\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-fwdbg\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n 
\"nodeName\": \"latest-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-fwdbg\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-fwdbg\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T17:47:49Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T17:47:51Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T17:47:51Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-04T17:47:49Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://0c289df85a8317cd44dfabd80df68dbef5b172614ffad00f42c242310f5141c2\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-04T17:47:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.12\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.210\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.210\"\n }\n ],\n 
\"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-04T17:47:49Z\"\n }\n}\n" STEP: replace the image in the pod Apr 4 17:47:54.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-8255' Apr 4 17:47:54.654: INFO: stderr: "" Apr 4 17:47:54.654: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 Apr 4 17:47:54.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8255' Apr 4 17:48:03.000: INFO: stderr: "" Apr 4 17:48:03.000: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:48:03.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8255" for this suite. 
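The image check in the test above boils down to reading `spec.containers[*].image` from the `kubectl get pod -o json` output. A minimal sketch of that verification in Python (the pod dict below is a trimmed, hypothetical stand-in for the full kubectl output shown in the log):

```python
import json

def container_images(pod: dict) -> list:
    """Return the image of every container in a pod spec dict."""
    return [c["image"] for c in pod.get("spec", {}).get("containers", [])]

# Trimmed stand-in for the `kubectl get pod -o json` output after `kubectl replace`.
pod_json = json.loads("""
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "e2e-test-httpd-pod"},
  "spec": {
    "containers": [
      {"name": "e2e-test-httpd-pod", "image": "docker.io/library/busybox:1.29"}
    ]
  }
}
""")

assert container_images(pod_json) == ["docker.io/library/busybox:1.29"]
```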
• [SLOW TEST:14.504 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1480 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":281,"completed":68,"skipped":1168,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:48:03.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 4 17:48:03.219: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 4 17:48:03.227: INFO: Waiting for terminating namespaces to be deleted... 
Apr 4 17:48:03.228: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 4 17:48:03.245: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 17:48:03.245: INFO: Container kindnet-cni ready: true, restart count 0 Apr 4 17:48:03.245: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 17:48:03.245: INFO: Container kube-proxy ready: true, restart count 0 Apr 4 17:48:03.245: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 4 17:48:03.258: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 17:48:03.258: INFO: Container kindnet-cni ready: true, restart count 0 Apr 4 17:48:03.258: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 17:48:03.258: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d5a80a48-bc31-4e1b-9552-5f941defc891 42 STEP: Trying to relaunch the pod, now with labels. 
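The relaunch step above pairs the random node label (the log shows key `kubernetes.io/e2e-d5a80a48-...` with value `42`) with a matching `nodeSelector` on the new pod. A hypothetical reconstruction of that manifest in Python (the container image is an assumption, not taken from the log):

```python
import uuid

# Unique label key in the same style the test generates.
label_key = "kubernetes.io/e2e-" + str(uuid.uuid4())
label_value = "42"  # the log shows the literal value 42

pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "with-labels"},
    "spec": {
        # Image is a hypothetical placeholder; the log does not record it.
        "containers": [{"name": "with-labels", "image": "k8s.gcr.io/pause:3.2"}],
        # Must match the label applied to the node exactly, or the pod stays Pending.
        "nodeSelector": {label_key: label_value},
    },
}

assert pod_manifest["spec"]["nodeSelector"][label_key] == "42"
```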
STEP: removing the label kubernetes.io/e2e-d5a80a48-bc31-4e1b-9552-5f941defc891 off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d5a80a48-bc31-4e1b-9552-5f941defc891 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:48:11.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6311" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.418 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":281,"completed":69,"skipped":1188,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:48:11.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 17:48:11.466: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 4 17:48:14.397: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3546 create -f -' Apr 4 17:48:17.880: INFO: stderr: "" Apr 4 17:48:17.880: INFO: stdout: "e2e-test-crd-publish-openapi-3150-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 4 17:48:17.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3546 delete e2e-test-crd-publish-openapi-3150-crds test-cr' Apr 4 17:48:18.216: INFO: stderr: "" Apr 4 17:48:18.216: INFO: stdout: "e2e-test-crd-publish-openapi-3150-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 4 17:48:18.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3546 apply -f -' Apr 4 17:48:18.447: INFO: stderr: "" Apr 4 17:48:18.447: INFO: stdout: "e2e-test-crd-publish-openapi-3150-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 4 17:48:18.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3546 delete e2e-test-crd-publish-openapi-3150-crds test-cr' Apr 4 17:48:18.548: INFO: stderr: "" Apr 4 17:48:18.548: INFO: stdout: "e2e-test-crd-publish-openapi-3150-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Apr 4 17:48:18.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3150-crds' Apr 4 
17:48:18.779: INFO: stderr: "" Apr 4 17:48:18.779: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3150-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:48:21.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3546" for this suite. • [SLOW TEST:10.334 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":281,"completed":70,"skipped":1195,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:48:21.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: 
delete the rc STEP: wait for the rc to be deleted Apr 4 17:48:28.781: INFO: 10 pods remaining Apr 4 17:48:28.781: INFO: 9 pods has nil DeletionTimestamp Apr 4 17:48:28.781: INFO: Apr 4 17:48:30.184: INFO: 0 pods remaining Apr 4 17:48:30.185: INFO: 0 pods has nil DeletionTimestamp Apr 4 17:48:30.185: INFO: STEP: Gathering metrics W0404 17:48:30.864623 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 4 17:48:30.864: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:48:30.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6763" for this suite. 
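The "keep the rc around until all its pods are deleted" behavior is driven by the `DeleteOptions` sent with the delete request. A sketch of that request body, under the assumption that this test exercises foreground cascading deletion (the log itself does not print the options):

```python
# Assumed DeleteOptions body: with Foreground propagation the API server
# removes the owner (the RC) only after all dependents (its pods) are gone,
# matching the "N pods remaining" countdown in the log.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Foreground",
}

assert delete_options["propagationPolicy"] == "Foreground"
```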
• [SLOW TEST:9.112 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":281,"completed":71,"skipped":1216,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:48:30.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 4 17:48:37.611: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5326 PodName:pod-sharedvolume-c8c5fd47-868f-4cce-b1a7-87a077d36082 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 17:48:37.611: INFO: >>> kubeConfig: /root/.kube/config I0404 17:48:37.649299 7 log.go:172] (0xc0044568f0) (0xc002bfc640) Create stream I0404 17:48:37.649330 7 log.go:172] (0xc0044568f0) 
(0xc002bfc640) Stream added, broadcasting: 1 I0404 17:48:37.651370 7 log.go:172] (0xc0044568f0) Reply frame received for 1 I0404 17:48:37.651415 7 log.go:172] (0xc0044568f0) (0xc000b1efa0) Create stream I0404 17:48:37.651427 7 log.go:172] (0xc0044568f0) (0xc000b1efa0) Stream added, broadcasting: 3 I0404 17:48:37.652472 7 log.go:172] (0xc0044568f0) Reply frame received for 3 I0404 17:48:37.652516 7 log.go:172] (0xc0044568f0) (0xc0016a75e0) Create stream I0404 17:48:37.652532 7 log.go:172] (0xc0044568f0) (0xc0016a75e0) Stream added, broadcasting: 5 I0404 17:48:37.653950 7 log.go:172] (0xc0044568f0) Reply frame received for 5 I0404 17:48:37.713721 7 log.go:172] (0xc0044568f0) Data frame received for 5 I0404 17:48:37.713763 7 log.go:172] (0xc0016a75e0) (5) Data frame handling I0404 17:48:37.713787 7 log.go:172] (0xc0044568f0) Data frame received for 3 I0404 17:48:37.713808 7 log.go:172] (0xc000b1efa0) (3) Data frame handling I0404 17:48:37.713824 7 log.go:172] (0xc000b1efa0) (3) Data frame sent I0404 17:48:37.713844 7 log.go:172] (0xc0044568f0) Data frame received for 3 I0404 17:48:37.713855 7 log.go:172] (0xc000b1efa0) (3) Data frame handling I0404 17:48:37.715212 7 log.go:172] (0xc0044568f0) Data frame received for 1 I0404 17:48:37.715232 7 log.go:172] (0xc002bfc640) (1) Data frame handling I0404 17:48:37.715245 7 log.go:172] (0xc002bfc640) (1) Data frame sent I0404 17:48:37.715259 7 log.go:172] (0xc0044568f0) (0xc002bfc640) Stream removed, broadcasting: 1 I0404 17:48:37.715271 7 log.go:172] (0xc0044568f0) Go away received I0404 17:48:37.715566 7 log.go:172] (0xc0044568f0) (0xc002bfc640) Stream removed, broadcasting: 1 I0404 17:48:37.715610 7 log.go:172] (0xc0044568f0) (0xc000b1efa0) Stream removed, broadcasting: 3 I0404 17:48:37.715635 7 log.go:172] (0xc0044568f0) (0xc0016a75e0) Stream removed, broadcasting: 5 Apr 4 17:48:37.715: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:48:37.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5326" for this suite. • [SLOW TEST:6.852 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":281,"completed":72,"skipped":1227,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:48:37.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9204 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9204 STEP: creating replication controller externalsvc in namespace 
services-9204 I0404 17:48:37.893379 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9204, replica count: 2 I0404 17:48:40.943903 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 17:48:43.944145 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 4 17:48:43.995: INFO: Creating new exec pod Apr 4 17:48:48.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-9204 execpodjwr4l -- /bin/sh -x -c nslookup nodeport-service' Apr 4 17:48:48.208: INFO: stderr: "I0404 17:48:48.136909 1909 log.go:172] (0xc00095f6b0) (0xc0009588c0) Create stream\nI0404 17:48:48.136966 1909 log.go:172] (0xc00095f6b0) (0xc0009588c0) Stream added, broadcasting: 1\nI0404 17:48:48.141751 1909 log.go:172] (0xc00095f6b0) Reply frame received for 1\nI0404 17:48:48.141787 1909 log.go:172] (0xc00095f6b0) (0xc0005712c0) Create stream\nI0404 17:48:48.141797 1909 log.go:172] (0xc00095f6b0) (0xc0005712c0) Stream added, broadcasting: 3\nI0404 17:48:48.142623 1909 log.go:172] (0xc00095f6b0) Reply frame received for 3\nI0404 17:48:48.142654 1909 log.go:172] (0xc00095f6b0) (0xc0003e68c0) Create stream\nI0404 17:48:48.142662 1909 log.go:172] (0xc00095f6b0) (0xc0003e68c0) Stream added, broadcasting: 5\nI0404 17:48:48.143407 1909 log.go:172] (0xc00095f6b0) Reply frame received for 5\nI0404 17:48:48.192432 1909 log.go:172] (0xc00095f6b0) Data frame received for 5\nI0404 17:48:48.192464 1909 log.go:172] (0xc0003e68c0) (5) Data frame handling\nI0404 17:48:48.192484 1909 log.go:172] (0xc0003e68c0) (5) Data frame sent\n+ nslookup nodeport-service\nI0404 17:48:48.199172 1909 log.go:172] (0xc00095f6b0) Data frame received for 3\nI0404 
17:48:48.199195 1909 log.go:172] (0xc0005712c0) (3) Data frame handling\nI0404 17:48:48.199208 1909 log.go:172] (0xc0005712c0) (3) Data frame sent\nI0404 17:48:48.200650 1909 log.go:172] (0xc00095f6b0) Data frame received for 3\nI0404 17:48:48.200681 1909 log.go:172] (0xc0005712c0) (3) Data frame handling\nI0404 17:48:48.200702 1909 log.go:172] (0xc0005712c0) (3) Data frame sent\nI0404 17:48:48.201099 1909 log.go:172] (0xc00095f6b0) Data frame received for 5\nI0404 17:48:48.201236 1909 log.go:172] (0xc0003e68c0) (5) Data frame handling\nI0404 17:48:48.201265 1909 log.go:172] (0xc00095f6b0) Data frame received for 3\nI0404 17:48:48.201278 1909 log.go:172] (0xc0005712c0) (3) Data frame handling\nI0404 17:48:48.203125 1909 log.go:172] (0xc00095f6b0) Data frame received for 1\nI0404 17:48:48.203146 1909 log.go:172] (0xc0009588c0) (1) Data frame handling\nI0404 17:48:48.203161 1909 log.go:172] (0xc0009588c0) (1) Data frame sent\nI0404 17:48:48.203174 1909 log.go:172] (0xc00095f6b0) (0xc0009588c0) Stream removed, broadcasting: 1\nI0404 17:48:48.203204 1909 log.go:172] (0xc00095f6b0) Go away received\nI0404 17:48:48.203518 1909 log.go:172] (0xc00095f6b0) (0xc0009588c0) Stream removed, broadcasting: 1\nI0404 17:48:48.203532 1909 log.go:172] (0xc00095f6b0) (0xc0005712c0) Stream removed, broadcasting: 3\nI0404 17:48:48.203541 1909 log.go:172] (0xc00095f6b0) (0xc0003e68c0) Stream removed, broadcasting: 5\n" Apr 4 17:48:48.208: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9204.svc.cluster.local\tcanonical name = externalsvc.services-9204.svc.cluster.local.\nName:\texternalsvc.services-9204.svc.cluster.local\nAddress: 10.96.228.34\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9204, will wait for the garbage collector to delete the pods Apr 4 17:48:48.273: INFO: Deleting ReplicationController externalsvc took: 11.94265ms Apr 4 17:48:48.373: INFO: Terminating ReplicationController externalsvc pods took: 
100.228424ms Apr 4 17:49:03.126: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:49:03.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9204" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:25.430 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":281,"completed":73,"skipped":1236,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:49:03.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall 
+answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8269.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8269.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 17:49:09.321: INFO: DNS probes using dns-8269/dns-test-7c97c6d1-41bd-4c2f-8590-d962615830d7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:49:09.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8269" for this suite. 
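The `podARec=$$(hostname -i | awk -F. ...)` fragment in the probe script builds the pod's A record by replacing the dots of its IP with dashes. The same transformation in Python, checked against the pod IP that appears earlier in this log:

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the awk one-liner from the probe script: IP dots become dashes."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

assert pod_a_record("10.244.1.210", "dns-8269") == "10-244-1-210.dns-8269.pod.cluster.local"
```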
• [SLOW TEST:6.315 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":281,"completed":74,"skipped":1244,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:49:09.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 17:49:09.822: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Pending, waiting for it to be Running (with Ready = true) Apr 4 17:49:11.824: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Pending, waiting for it to be Running (with Ready = true) Apr 4 17:49:13.825: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Pending, waiting for it to be Running (with Ready = 
true) Apr 4 17:49:15.828: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Running (Ready = false) Apr 4 17:49:17.826: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Running (Ready = false) Apr 4 17:49:19.825: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Running (Ready = false) Apr 4 17:49:21.826: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Running (Ready = false) Apr 4 17:49:23.826: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Running (Ready = false) Apr 4 17:49:25.843: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Running (Ready = false) Apr 4 17:49:27.825: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Running (Ready = false) Apr 4 17:49:29.826: INFO: The status of Pod test-webserver-4500b4a8-9cd9-47c9-87d2-08becbbf641a is Running (Ready = true) Apr 4 17:49:29.829: INFO: Container started at 2020-04-04 17:49:13 +0000 UTC, pod became ready at 2020-04-04 17:49:29 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:49:29.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2182" for this suite. 
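The final INFO line records the point of this test: the container started at 17:49:13 but the pod only became Ready at 17:49:29, because the readiness probe's initial delay must elapse first. A small Python check of that gap using the timestamps from the log:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S"
started = datetime.strptime("2020-04-04 17:49:13", fmt)  # container start, from the log
ready = datetime.strptime("2020-04-04 17:49:29", fmt)    # pod Ready, from the log

# The pod was Running but not Ready for this window, consistent with the
# repeated "Running (Ready = false)" lines above.
assert (ready - started).total_seconds() == 16.0
```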
• [SLOW TEST:20.367 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":281,"completed":75,"skipped":1321,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:49:29.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-63953ea8-092d-461b-a262-28d6f85f9b1f STEP: Creating configMap with name cm-test-opt-upd-a5c64c57-7f91-466e-a4eb-6c194a076c83 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-63953ea8-092d-461b-a262-28d6f85f9b1f STEP: Updating configmap cm-test-opt-upd-a5c64c57-7f91-466e-a4eb-6c194a076c83 STEP: Creating configMap with name cm-test-opt-create-383c20ce-fc08-44b2-b88d-aa5ffe4bd1ad STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:50:42.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3073" for this suite. • [SLOW TEST:72.560 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":281,"completed":76,"skipped":1355,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:50:42.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-fc96cd78-401a-4200-9d8f-80847bca7b8a STEP: Creating a pod to test consume configMaps Apr 4 17:50:42.527: INFO: Waiting up to 5m0s for pod "pod-configmaps-15e2b978-9d68-4b47-af3a-0ef35c213b29" in namespace "configmap-1047" to be "Succeeded or Failed" Apr 4 17:50:42.578: INFO: Pod 
"pod-configmaps-15e2b978-9d68-4b47-af3a-0ef35c213b29": Phase="Pending", Reason="", readiness=false. Elapsed: 50.515643ms Apr 4 17:50:44.636: INFO: Pod "pod-configmaps-15e2b978-9d68-4b47-af3a-0ef35c213b29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108706496s Apr 4 17:50:46.640: INFO: Pod "pod-configmaps-15e2b978-9d68-4b47-af3a-0ef35c213b29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113005997s STEP: Saw pod success Apr 4 17:50:46.640: INFO: Pod "pod-configmaps-15e2b978-9d68-4b47-af3a-0ef35c213b29" satisfied condition "Succeeded or Failed" Apr 4 17:50:46.643: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-15e2b978-9d68-4b47-af3a-0ef35c213b29 container configmap-volume-test: STEP: delete the pod Apr 4 17:50:46.692: INFO: Waiting for pod pod-configmaps-15e2b978-9d68-4b47-af3a-0ef35c213b29 to disappear Apr 4 17:50:46.703: INFO: Pod pod-configmaps-15e2b978-9d68-4b47-af3a-0ef35c213b29 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:50:46.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1047" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":281,"completed":77,"skipped":1403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:50:46.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 4 17:50:46.784: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8945 /api/v1/namespaces/watch-8945/configmaps/e2e-watch-test-watch-closed 6c058563-91a1-4097-9821-aab7be260854 5395697 0 2020-04-04 17:50:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 17:50:46.784: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8945 /api/v1/namespaces/watch-8945/configmaps/e2e-watch-test-watch-closed 6c058563-91a1-4097-9821-aab7be260854 5395698 0 2020-04-04 17:50:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 4 17:50:46.794: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8945 /api/v1/namespaces/watch-8945/configmaps/e2e-watch-test-watch-closed 6c058563-91a1-4097-9821-aab7be260854 5395699 0 2020-04-04 17:50:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 17:50:46.794: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8945 /api/v1/namespaces/watch-8945/configmaps/e2e-watch-test-watch-closed 6c058563-91a1-4097-9821-aab7be260854 5395700 0 2020-04-04 17:50:46 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:50:46.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8945" for this suite. 
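Editor's note: the Watchers test above closes its watch after the ADDED and first MODIFIED events (resourceVersion 5395698), then resumes from that version and receives only the later MODIFIED (5395699) and DELETED (5395700) events. At the raw API level, resuming a watch is just a list request with `watch=1` and the last observed `resourceVersion`; the sketch below builds that URL from values in the log (the server address is the one the kubectl calls elsewhere in this log use, and the credentials line is a hypothetical placeholder).

```shell
NS=watch-8945     # namespace from the log
RV=5395698        # resourceVersion of the last event seen before the watch closed
# Resuming a watch replays only events with a newer resourceVersion:
URL="https://172.30.12.66:32771/api/v1/namespaces/${NS}/configmaps?watch=1&resourceVersion=${RV}"
echo "$URL"
# curl --cacert ca.crt -H "Authorization: Bearer $TOKEN" "$URL"   # needs cluster creds
```

This is why the second watch sees `mutation: 2` but never replays `mutation: 1`: the server only streams events strictly after the supplied resourceVersion.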
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":281,"completed":78,"skipped":1432,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:50:46.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-6eb3d297-f95a-4c75-aef6-038af7a54ac8 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:50:46.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9789" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":281,"completed":79,"skipped":1436,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:50:46.851: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-1151/configmap-test-b52e0df1-95e3-4f9c-9a52-0789dab4a777 STEP: Creating a pod to test consume configMaps Apr 4 17:50:46.920: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2" in namespace "configmap-1151" to be "Succeeded or Failed" Apr 4 17:50:46.949: INFO: Pod "pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.180771ms Apr 4 17:50:48.956: INFO: Pod "pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035166347s Apr 4 17:50:51.252: INFO: Pod "pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.331359656s Apr 4 17:50:53.439: INFO: Pod "pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.51894179s STEP: Saw pod success Apr 4 17:50:53.439: INFO: Pod "pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2" satisfied condition "Succeeded or Failed" Apr 4 17:50:53.442: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2 container env-test: STEP: delete the pod Apr 4 17:50:53.575: INFO: Waiting for pod pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2 to disappear Apr 4 17:50:53.584: INFO: Pod pod-configmaps-1e09ff4b-edd1-49b1-9cb9-455085e5dbb2 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:50:53.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1151" for this suite. • [SLOW TEST:6.739 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":281,"completed":80,"skipped":1452,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:50:53.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable 
from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-3be92cfa-3533-4d4b-ab0b-cd7fbc506608 STEP: Creating a pod to test consume configMaps Apr 4 17:50:53.821: INFO: Waiting up to 5m0s for pod "pod-configmaps-8283ce80-e17b-440f-af8c-561ca14dd3a3" in namespace "configmap-9148" to be "Succeeded or Failed" Apr 4 17:50:53.824: INFO: Pod "pod-configmaps-8283ce80-e17b-440f-af8c-561ca14dd3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.088514ms Apr 4 17:50:55.836: INFO: Pod "pod-configmaps-8283ce80-e17b-440f-af8c-561ca14dd3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014544381s Apr 4 17:50:57.840: INFO: Pod "pod-configmaps-8283ce80-e17b-440f-af8c-561ca14dd3a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018657431s STEP: Saw pod success Apr 4 17:50:57.840: INFO: Pod "pod-configmaps-8283ce80-e17b-440f-af8c-561ca14dd3a3" satisfied condition "Succeeded or Failed" Apr 4 17:50:57.842: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-8283ce80-e17b-440f-af8c-561ca14dd3a3 container configmap-volume-test: STEP: delete the pod Apr 4 17:50:57.872: INFO: Waiting for pod pod-configmaps-8283ce80-e17b-440f-af8c-561ca14dd3a3 to disappear Apr 4 17:50:57.883: INFO: Pod pod-configmaps-8283ce80-e17b-440f-af8c-561ca14dd3a3 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:50:57.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9148" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":81,"skipped":1468,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:50:57.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 4 17:50:57.951: INFO: Waiting up to 5m0s for pod "pod-cb78f8c6-cd80-4bd8-8393-89bd6bf015e6" in namespace "emptydir-3078" to be "Succeeded or Failed" Apr 4 17:50:57.965: INFO: Pod "pod-cb78f8c6-cd80-4bd8-8393-89bd6bf015e6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.828915ms Apr 4 17:50:59.971: INFO: Pod "pod-cb78f8c6-cd80-4bd8-8393-89bd6bf015e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019873266s Apr 4 17:51:01.975: INFO: Pod "pod-cb78f8c6-cd80-4bd8-8393-89bd6bf015e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024139716s STEP: Saw pod success Apr 4 17:51:01.975: INFO: Pod "pod-cb78f8c6-cd80-4bd8-8393-89bd6bf015e6" satisfied condition "Succeeded or Failed" Apr 4 17:51:01.978: INFO: Trying to get logs from node latest-worker2 pod pod-cb78f8c6-cd80-4bd8-8393-89bd6bf015e6 container test-container: STEP: delete the pod Apr 4 17:51:01.998: INFO: Waiting for pod pod-cb78f8c6-cd80-4bd8-8393-89bd6bf015e6 to disappear Apr 4 17:51:02.002: INFO: Pod pod-cb78f8c6-cd80-4bd8-8393-89bd6bf015e6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:51:02.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3078" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":82,"skipped":1473,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:51:02.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-28160da1-689f-43a7-bc33-96478277c4c7 STEP: Creating a pod to test consume secrets Apr 4 17:51:02.074: INFO: Waiting up to 5m0s for pod 
"pod-secrets-c018a0af-7b5f-4d92-ad46-88da325b8bda" in namespace "secrets-3657" to be "Succeeded or Failed" Apr 4 17:51:02.115: INFO: Pod "pod-secrets-c018a0af-7b5f-4d92-ad46-88da325b8bda": Phase="Pending", Reason="", readiness=false. Elapsed: 41.195754ms Apr 4 17:51:04.119: INFO: Pod "pod-secrets-c018a0af-7b5f-4d92-ad46-88da325b8bda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045518647s Apr 4 17:51:06.123: INFO: Pod "pod-secrets-c018a0af-7b5f-4d92-ad46-88da325b8bda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04961753s STEP: Saw pod success Apr 4 17:51:06.123: INFO: Pod "pod-secrets-c018a0af-7b5f-4d92-ad46-88da325b8bda" satisfied condition "Succeeded or Failed" Apr 4 17:51:06.127: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-c018a0af-7b5f-4d92-ad46-88da325b8bda container secret-volume-test: STEP: delete the pod Apr 4 17:51:06.188: INFO: Waiting for pod pod-secrets-c018a0af-7b5f-4d92-ad46-88da325b8bda to disappear Apr 4 17:51:06.194: INFO: Pod pod-secrets-c018a0af-7b5f-4d92-ad46-88da325b8bda no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:51:06.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3657" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":83,"skipped":1474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:51:06.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Apr 4 17:51:06.268: INFO: Waiting up to 5m0s for pod "downward-api-a69fc610-0af1-46b4-9246-60f4d7b50191" in namespace "downward-api-7720" to be "Succeeded or Failed" Apr 4 17:51:06.279: INFO: Pod "downward-api-a69fc610-0af1-46b4-9246-60f4d7b50191": Phase="Pending", Reason="", readiness=false. Elapsed: 11.203243ms Apr 4 17:51:08.283: INFO: Pod "downward-api-a69fc610-0af1-46b4-9246-60f4d7b50191": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014600055s Apr 4 17:51:10.287: INFO: Pod "downward-api-a69fc610-0af1-46b4-9246-60f4d7b50191": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018542674s STEP: Saw pod success Apr 4 17:51:10.287: INFO: Pod "downward-api-a69fc610-0af1-46b4-9246-60f4d7b50191" satisfied condition "Succeeded or Failed" Apr 4 17:51:10.289: INFO: Trying to get logs from node latest-worker2 pod downward-api-a69fc610-0af1-46b4-9246-60f4d7b50191 container dapi-container: STEP: delete the pod Apr 4 17:51:10.310: INFO: Waiting for pod downward-api-a69fc610-0af1-46b4-9246-60f4d7b50191 to disappear Apr 4 17:51:10.339: INFO: Pod downward-api-a69fc610-0af1-46b4-9246-60f4d7b50191 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:51:10.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7720" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":281,"completed":84,"skipped":1515,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:51:10.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1236 STEP: creating the pod Apr 4 17:51:10.544: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9541' Apr 4 17:51:10.934: INFO: stderr: "" Apr 4 17:51:10.934: INFO: stdout: "pod/pause created\n" Apr 4 17:51:10.934: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 4 17:51:10.934: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9541" to be "running and ready" Apr 4 17:51:10.944: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.843881ms Apr 4 17:51:12.956: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022595721s Apr 4 17:51:14.961: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.027562413s Apr 4 17:51:14.962: INFO: Pod "pause" satisfied condition "running and ready" Apr 4 17:51:14.962: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Apr 4 17:51:14.962: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9541' Apr 4 17:51:15.128: INFO: stderr: "" Apr 4 17:51:15.128: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 4 17:51:15.128: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9541' Apr 4 17:51:15.253: INFO: stderr: "" Apr 4 17:51:15.253: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 4 17:51:15.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config label 
pods pause testing-label- --namespace=kubectl-9541' Apr 4 17:51:15.356: INFO: stderr: "" Apr 4 17:51:15.356: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 4 17:51:15.356: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9541' Apr 4 17:51:15.440: INFO: stderr: "" Apr 4 17:51:15.440: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1243 STEP: using delete to clean up resources Apr 4 17:51:15.440: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9541' Apr 4 17:51:15.593: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 4 17:51:15.593: INFO: stdout: "pod \"pause\" force deleted\n" Apr 4 17:51:15.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9541' Apr 4 17:51:15.866: INFO: stderr: "No resources found in kubectl-9541 namespace.\n" Apr 4 17:51:15.866: INFO: stdout: "" Apr 4 17:51:15.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9541 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 17:51:15.957: INFO: stderr: "" Apr 4 17:51:15.957: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:51:15.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9541" for this suite. 
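Editor's note: the three kubectl invocations driving the label test above reduce to the commands sketched below (namespace taken from the log; the `--server`/`--kubeconfig` flags are omitted here, and the commands are echoed rather than run since they need a live cluster).

```shell
# The label operations from the log, as plain kubectl calls.
KCTL="kubectl --namespace=kubectl-9541"
echo "$KCTL label pods pause testing-label=testing-label-value"  # add the label
echo "$KCTL get pod pause -L testing-label"                      # -L shows it as a column
echo "$KCTL label pods pause testing-label-"                     # trailing '-' removes it
```

The trailing hyphen syntax (`testing-label-`) is kubectl's idiom for label removal, which is why the second `get -L` output in the log shows an empty TESTING-LABEL column.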
• [SLOW TEST:5.642 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1233 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":281,"completed":85,"skipped":1522,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:51:15.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-646acf90-9df2-40fa-b0d6-fe3d6ba68dec [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:51:16.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9907" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":281,"completed":86,"skipped":1536,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:51:16.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 4 17:51:26.526: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 17:51:26.531: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 17:51:28.531: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 17:51:28.594: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 17:51:30.531: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 17:51:30.535: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 17:51:32.531: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 17:51:32.540: INFO: Pod pod-with-prestop-exec-hook still exists Apr 4 17:51:34.531: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 4 17:51:34.534: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:51:34.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9691" for this suite. 
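Editor's note: the preStop test above creates a pod whose lifecycle hook runs on deletion, which is why the pod lingers for several seconds of "still exists" polling before disappearing. A sketch of such a pod follows; in the real test the hook reports back to the HTTPGet-handler pod created in BeforeEach, while this illustrative version just writes a file.

```shell
# Hypothetical pod with a preStop exec hook (hook command is a stand-in
# for the real test's call back to its handler pod).
cat > /tmp/prestop.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox        # placeholder
    command: ["sleep", "600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop > /tmp/hook"]
EOF
# kubectl apply -f /tmp/prestop.yaml   # requires a live cluster
grep -q 'preStop' /tmp/prestop.yaml && echo ok
```

On `kubectl delete`, the kubelet runs the preStop command before sending the container its termination signal, so deletion takes at least as long as the hook does.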
• [SLOW TEST:18.493 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":281,"completed":87,"skipped":1538,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:51:34.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-4e1da2da-3a1b-45d2-8ab4-cd908a02bc83 in namespace container-probe-4339
Apr 4 17:51:38.667: INFO: Started pod liveness-4e1da2da-3a1b-45d2-8ab4-cd908a02bc83 in namespace container-probe-4339
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 17:51:38.670: INFO: Initial restart count of pod liveness-4e1da2da-3a1b-45d2-8ab4-cd908a02bc83 is 0
Apr 4 17:51:55.457: INFO: Restart count of pod container-probe-4339/liveness-4e1da2da-3a1b-45d2-8ab4-cd908a02bc83 is now 1 (16.786832315s elapsed)
Apr 4 17:52:13.490: INFO: Restart count of pod container-probe-4339/liveness-4e1da2da-3a1b-45d2-8ab4-cd908a02bc83 is now 2 (34.819611123s elapsed)
Apr 4 17:52:33.664: INFO: Restart count of pod container-probe-4339/liveness-4e1da2da-3a1b-45d2-8ab4-cd908a02bc83 is now 3 (54.993317178s elapsed)
Apr 4 17:52:53.705: INFO: Restart count of pod container-probe-4339/liveness-4e1da2da-3a1b-45d2-8ab4-cd908a02bc83 is now 4 (1m15.034901831s elapsed)
Apr 4 17:54:03.847: INFO: Restart count of pod container-probe-4339/liveness-4e1da2da-3a1b-45d2-8ab4-cd908a02bc83 is now 5 (2m25.177057076s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:03.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4339" for this suite.
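A minimal manifest that produces this kind of monotonically increasing restartCount is sketched below. Assumptions: the image, command, and probe values are illustrative, following the standard liveness-probe pattern from the Kubernetes docs rather than the suite's exact fixture.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec          # illustrative name
spec:
  containers:
  - name: liveness
    image: busybox             # illustrative image
    # Healthy for 10s, then the probe file disappears and the probe starts failing.
    args: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
```

The kubelet restarts failed containers with exponential back-off, which is why the gaps between restarts in the log grow from ~17 s toward ~70 s.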
• [SLOW TEST:149.316 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":281,"completed":88,"skipped":1565,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:03.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Apr 4 17:54:03.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:18.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2026" for this suite.

• [SLOW TEST:14.632 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":281,"completed":89,"skipped":1573,"failed":0}
SSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:18.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-map-372e52af-2964-4ef0-b375-ae0d8354663f
STEP: Creating a pod to test consume secrets
Apr 4 17:54:18.588: INFO: Waiting up to 5m0s for pod "pod-secrets-1dda4215-02ec-4c0a-8683-9dbdde78afa4" in namespace "secrets-6903" to be "Succeeded or Failed"
Apr 4 17:54:18.608: INFO: Pod "pod-secrets-1dda4215-02ec-4c0a-8683-9dbdde78afa4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.389675ms
Apr 4 17:54:20.652: INFO: Pod "pod-secrets-1dda4215-02ec-4c0a-8683-9dbdde78afa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063311787s
Apr 4 17:54:22.656: INFO: Pod "pod-secrets-1dda4215-02ec-4c0a-8683-9dbdde78afa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067560035s
STEP: Saw pod success
Apr 4 17:54:22.656: INFO: Pod "pod-secrets-1dda4215-02ec-4c0a-8683-9dbdde78afa4" satisfied condition "Succeeded or Failed"
Apr 4 17:54:22.659: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-1dda4215-02ec-4c0a-8683-9dbdde78afa4 container secret-volume-test:
STEP: delete the pod
Apr 4 17:54:22.725: INFO: Waiting for pod pod-secrets-1dda4215-02ec-4c0a-8683-9dbdde78afa4 to disappear
Apr 4 17:54:22.738: INFO: Pod pod-secrets-1dda4215-02ec-4c0a-8683-9dbdde78afa4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:22.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6903" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":281,"completed":90,"skipped":1576,"failed":0}
S
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:22.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service endpoint-test2 in namespace services-6674
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6674 to expose endpoints map[]
Apr 4 17:54:22.852: INFO: Get endpoints failed (4.590818ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Apr 4 17:54:23.856: INFO: successfully validated that service endpoint-test2 in namespace services-6674 exposes endpoints map[] (1.008649459s elapsed)
STEP: Creating pod pod1 in namespace services-6674
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6674 to expose endpoints map[pod1:[80]]
Apr 4 17:54:26.909: INFO: successfully validated that service endpoint-test2 in namespace services-6674 exposes endpoints map[pod1:[80]] (3.040262407s elapsed)
STEP: Creating pod pod2 in namespace services-6674
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6674 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 4 17:54:30.038: INFO: successfully validated that service endpoint-test2 in namespace services-6674 exposes endpoints map[pod1:[80] pod2:[80]] (3.124073003s elapsed)
STEP: Deleting pod pod1 in namespace services-6674
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6674 to expose endpoints map[pod2:[80]]
Apr 4 17:54:31.074: INFO: successfully validated that service endpoint-test2 in namespace services-6674 exposes endpoints map[pod2:[80]] (1.031346359s elapsed)
STEP: Deleting pod pod2 in namespace services-6674
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6674 to expose endpoints map[]
Apr 4 17:54:32.087: INFO: successfully validated that service endpoint-test2 in namespace services-6674 exposes endpoints map[] (1.008092146s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:32.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6674" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:9.370 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":281,"completed":91,"skipped":1577,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:32.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 4 17:54:32.290: INFO: Waiting up to 5m0s for pod "pod-d7388961-24f3-4534-9ea2-faa918f9ab64" in namespace "emptydir-3287" to be "Succeeded or Failed"
Apr 4 17:54:32.293: INFO: Pod "pod-d7388961-24f3-4534-9ea2-faa918f9ab64": Phase="Pending", Reason="", readiness=false. Elapsed: 3.637542ms
Apr 4 17:54:34.327: INFO: Pod "pod-d7388961-24f3-4534-9ea2-faa918f9ab64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037229832s
Apr 4 17:54:36.332: INFO: Pod "pod-d7388961-24f3-4534-9ea2-faa918f9ab64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041714187s
STEP: Saw pod success
Apr 4 17:54:36.332: INFO: Pod "pod-d7388961-24f3-4534-9ea2-faa918f9ab64" satisfied condition "Succeeded or Failed"
Apr 4 17:54:36.335: INFO: Trying to get logs from node latest-worker2 pod pod-d7388961-24f3-4534-9ea2-faa918f9ab64 container test-container:
STEP: delete the pod
Apr 4 17:54:36.355: INFO: Waiting for pod pod-d7388961-24f3-4534-9ea2-faa918f9ab64 to disappear
Apr 4 17:54:36.359: INFO: Pod pod-d7388961-24f3-4534-9ea2-faa918f9ab64 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:36.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3287" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":92,"skipped":1615,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:36.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:180
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:54:36.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:40.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6864" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":281,"completed":93,"skipped":1638,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:40.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-7afb6a66-c54a-4d84-872b-259cede4e8f5
STEP: Creating a pod to test consume secrets
Apr 4 17:54:40.585: INFO: Waiting up to 5m0s for pod "pod-secrets-988c9e34-d607-4751-a28e-33718c3b4807" in namespace "secrets-4118" to be "Succeeded or Failed"
Apr 4 17:54:40.615: INFO: Pod "pod-secrets-988c9e34-d607-4751-a28e-33718c3b4807": Phase="Pending", Reason="", readiness=false. Elapsed: 29.903651ms
Apr 4 17:54:42.620: INFO: Pod "pod-secrets-988c9e34-d607-4751-a28e-33718c3b4807": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035346307s
Apr 4 17:54:44.625: INFO: Pod "pod-secrets-988c9e34-d607-4751-a28e-33718c3b4807": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039579081s
STEP: Saw pod success
Apr 4 17:54:44.625: INFO: Pod "pod-secrets-988c9e34-d607-4751-a28e-33718c3b4807" satisfied condition "Succeeded or Failed"
Apr 4 17:54:44.628: INFO: Trying to get logs from node latest-worker pod pod-secrets-988c9e34-d607-4751-a28e-33718c3b4807 container secret-volume-test:
STEP: delete the pod
Apr 4 17:54:44.658: INFO: Waiting for pod pod-secrets-988c9e34-d607-4751-a28e-33718c3b4807 to disappear
Apr 4 17:54:44.674: INFO: Pod pod-secrets-988c9e34-d607-4751-a28e-33718c3b4807 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:44.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4118" for this suite.
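The defaultMode variant above corresponds to mounting a Secret volume with an explicit file mode, roughly as sketched below. Assumptions: the pod name, image, command, and secret name are illustrative; the suite generates randomized names (visible in the log) and uses its own test image.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test   # container name as in the log
    image: busybox             # illustrative image
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test  # illustrative; the suite uses a random name
      defaultMode: 0400        # files appear with mode -r--------
```

The earlier "with mappings" variant is the same shape, but adds an `items:` list under `secret:` to remap individual keys to custom paths.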
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":94,"skipped":1651,"failed":0}
SSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:44.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Apr 4 17:54:44.725: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:51.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3958" for this suite.
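The "invoke init containers on a RestartNever pod" case exercises a pod shaped roughly like this sketch. Assumptions: names, images, and commands are illustrative; the point is the structure, not the suite's fixture.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo          # illustrative name
spec:
  restartPolicy: Never         # the RestartNever variant exercised above
  initContainers:              # run to completion, in order, before "containers" start
  - name: init1
    image: busybox             # illustrative image
    command: ["sh", "-c", "true"]
  - name: init2
    image: busybox
    command: ["sh", "-c", "true"]
  containers:
  - name: run1
    image: busybox
    command: ["sh", "-c", "true"]
```

If any init container fails on a RestartNever pod, the pod goes straight to Failed; the test asserts the success path, where both init containers complete before run1 starts.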
• [SLOW TEST:6.582 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":281,"completed":95,"skipped":1658,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:51.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 4 17:54:51.327: INFO: Waiting up to 5m0s for pod "downward-api-ac6d861e-29a8-4dc1-bcb7-37f686df9eca" in namespace "downward-api-8681" to be "Succeeded or Failed"
Apr 4 17:54:51.330: INFO: Pod "downward-api-ac6d861e-29a8-4dc1-bcb7-37f686df9eca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.161845ms
Apr 4 17:54:53.334: INFO: Pod "downward-api-ac6d861e-29a8-4dc1-bcb7-37f686df9eca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007383429s
Apr 4 17:54:55.338: INFO: Pod "downward-api-ac6d861e-29a8-4dc1-bcb7-37f686df9eca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011453592s
STEP: Saw pod success
Apr 4 17:54:55.338: INFO: Pod "downward-api-ac6d861e-29a8-4dc1-bcb7-37f686df9eca" satisfied condition "Succeeded or Failed"
Apr 4 17:54:55.342: INFO: Trying to get logs from node latest-worker2 pod downward-api-ac6d861e-29a8-4dc1-bcb7-37f686df9eca container dapi-container:
STEP: delete the pod
Apr 4 17:54:55.399: INFO: Waiting for pod downward-api-ac6d861e-29a8-4dc1-bcb7-37f686df9eca to disappear
Apr 4 17:54:55.414: INFO: Pod downward-api-ac6d861e-29a8-4dc1-bcb7-37f686df9eca no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:55.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8681" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":281,"completed":96,"skipped":1680,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:55.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Apr 4 17:54:55.536: INFO: Waiting up to 5m0s for pod "var-expansion-212f1995-b745-4383-bb0c-c5fb55f7948a" in namespace "var-expansion-5994" to be "Succeeded or Failed"
Apr 4 17:54:55.609: INFO: Pod "var-expansion-212f1995-b745-4383-bb0c-c5fb55f7948a": Phase="Pending", Reason="", readiness=false. Elapsed: 72.585161ms
Apr 4 17:54:57.612: INFO: Pod "var-expansion-212f1995-b745-4383-bb0c-c5fb55f7948a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076334032s
Apr 4 17:54:59.616: INFO: Pod "var-expansion-212f1995-b745-4383-bb0c-c5fb55f7948a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080496072s
STEP: Saw pod success
Apr 4 17:54:59.616: INFO: Pod "var-expansion-212f1995-b745-4383-bb0c-c5fb55f7948a" satisfied condition "Succeeded or Failed"
Apr 4 17:54:59.620: INFO: Trying to get logs from node latest-worker2 pod var-expansion-212f1995-b745-4383-bb0c-c5fb55f7948a container dapi-container:
STEP: delete the pod
Apr 4 17:54:59.657: INFO: Waiting for pod var-expansion-212f1995-b745-4383-bb0c-c5fb55f7948a to disappear
Apr 4 17:54:59.662: INFO: Pod var-expansion-212f1995-b745-4383-bb0c-c5fb55f7948a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:54:59.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5994" for this suite.
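The "env composition" being tested is Kubernetes' `$(VAR)` expansion in the `env` list, sketched below. Assumptions: names, image, and values are illustrative; only the container name `dapi-container` appears in the log above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container       # container name as in the log
    image: busybox             # illustrative image
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: "foo-value"
    - name: BAR
      value: "bar-value"
    - name: FOOBAR             # composed from earlier vars via $(VAR) syntax
      value: "$(FOO);;$(BAR)"
```

Expansion only sees variables defined earlier in the same `env` list; an unresolvable `$(X)` is left as a literal string rather than erroring.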
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":281,"completed":97,"skipped":1697,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:54:59.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 4 17:54:59.766: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5edf49a-118e-4c00-a521-8afd28b53927" in namespace "projected-4033" to be "Succeeded or Failed"
Apr 4 17:54:59.783: INFO: Pod "downwardapi-volume-e5edf49a-118e-4c00-a521-8afd28b53927": Phase="Pending", Reason="", readiness=false. Elapsed: 17.174785ms
Apr 4 17:55:01.787: INFO: Pod "downwardapi-volume-e5edf49a-118e-4c00-a521-8afd28b53927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02123321s
Apr 4 17:55:03.791: INFO: Pod "downwardapi-volume-e5edf49a-118e-4c00-a521-8afd28b53927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025470626s
STEP: Saw pod success
Apr 4 17:55:03.791: INFO: Pod "downwardapi-volume-e5edf49a-118e-4c00-a521-8afd28b53927" satisfied condition "Succeeded or Failed"
Apr 4 17:55:03.795: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-e5edf49a-118e-4c00-a521-8afd28b53927 container client-container:
STEP: delete the pod
Apr 4 17:55:03.814: INFO: Waiting for pod downwardapi-volume-e5edf49a-118e-4c00-a521-8afd28b53927 to disappear
Apr 4 17:55:03.818: INFO: Pod downwardapi-volume-e5edf49a-118e-4c00-a521-8afd28b53927 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:55:03.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4033" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":281,"completed":98,"skipped":1705,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:55:03.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 17:55:04.399: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 17:55:06.408: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619704, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619704, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619704, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619704, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 17:55:09.440: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:55:19.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6422" for this suite. STEP: Destroying namespace "webhook-6422-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.911 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":281,"completed":99,"skipped":1732,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:55:19.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be 
provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 4 17:55:22.885: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:55:23.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6766" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":281,"completed":100,"skipped":1751,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:55:23.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 4 17:55:23.228: INFO: Waiting up to 5m0s for pod "pod-a8ec3639-22e3-45ed-9463-fc0acccbb1fb" in namespace "emptydir-4867" to be "Succeeded or Failed"
Apr 4 17:55:23.304: INFO: Pod "pod-a8ec3639-22e3-45ed-9463-fc0acccbb1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 76.306735ms
Apr 4 17:55:25.308: INFO: Pod "pod-a8ec3639-22e3-45ed-9463-fc0acccbb1fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080688396s
Apr 4 17:55:27.312: INFO: Pod "pod-a8ec3639-22e3-45ed-9463-fc0acccbb1fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084679475s
STEP: Saw pod success
Apr 4 17:55:27.313: INFO: Pod "pod-a8ec3639-22e3-45ed-9463-fc0acccbb1fb" satisfied condition "Succeeded or Failed"
Apr 4 17:55:27.316: INFO: Trying to get logs from node latest-worker2 pod pod-a8ec3639-22e3-45ed-9463-fc0acccbb1fb container test-container:
STEP: delete the pod
Apr 4 17:55:27.335: INFO: Waiting for pod pod-a8ec3639-22e3-45ed-9463-fc0acccbb1fb to disappear
Apr 4 17:55:27.363: INFO: Pod pod-a8ec3639-22e3-45ed-9463-fc0acccbb1fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:55:27.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4867" for this suite.
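For reference, the (non-root,0666,default) case exercised above corresponds roughly to a pod of the following shape. This is an illustrative sketch, not the manifest the framework generates: the name, image, and UID are assumptions (the real test uses the e2e mounttest image).

```yaml
# Illustrative approximation of the (non-root,0666,default) emptyDir case.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo        # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # "non-root" part of the test matrix
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # "default" medium, i.e. node-local disk
```

The "Succeeded or Failed" loop in the log is the framework polling `pod.status.phase` until a pod of this shape terminates.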
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":101,"skipped":1756,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:55:27.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0404 17:55:39.986849 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 4 17:55:39.986: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:55:39.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8210" for this suite.
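The step "set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well" amounts to giving those pods two entries in `metadata.ownerReferences`, roughly as sketched below. The UIDs are placeholders (real ownerReferences must carry the owners' actual UIDs), and the container spec is illustrative.

```yaml
# Sketch of a pod with two owners, as in the garbage-collector test above.
apiVersion: v1
kind: Pod
metadata:
  name: simpletest-pod
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: "00000000-0000-0000-0000-000000000001"   # placeholder UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: "00000000-0000-0000-0000-000000000002"   # placeholder UID
spec:
  containers:
  - name: nginx
    image: nginx
```

Because the second owner remains valid after simpletest-rc-to-be-deleted is removed, the garbage collector must leave such pods in place, which is what the test asserts.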
• [SLOW TEST:12.622 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":281,"completed":102,"skipped":1789,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:55:39.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-6699
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 4 17:55:40.062: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 4 17:55:40.119: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 17:55:42.160: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 17:55:44.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:55:46.137: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:55:48.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:55:50.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:55:52.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:55:54.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:55:56.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:55:58.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:56:00.123: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 17:56:02.123: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 4 17:56:02.128: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 4 17:56:06.173: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.247:8080/dial?request=hostname&protocol=udp&host=10.244.2.173&port=8081&tries=1'] Namespace:pod-network-test-6699 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 17:56:06.173: INFO: >>> kubeConfig: /root/.kube/config
I0404 17:56:06.214165 7 log.go:172] (0xc001bf13f0) (0xc0011a9220) Create stream
I0404 17:56:06.214200 7 log.go:172] (0xc001bf13f0) (0xc0011a9220) Stream added, broadcasting: 1
I0404 17:56:06.216815 7 log.go:172] (0xc001bf13f0) Reply frame received for 1
I0404 17:56:06.216889 7 log.go:172] (0xc001bf13f0) (0xc001e59cc0) Create stream
I0404 17:56:06.216916 7 log.go:172] (0xc001bf13f0) (0xc001e59cc0) Stream added, broadcasting: 3
I0404 17:56:06.218136 7 log.go:172] (0xc001bf13f0) Reply frame received for 3
I0404 17:56:06.218159 7 log.go:172] (0xc001bf13f0) (0xc0011a92c0) Create stream
I0404 17:56:06.218165 7 log.go:172] (0xc001bf13f0) (0xc0011a92c0) Stream added, broadcasting: 5
I0404 17:56:06.219096 7 log.go:172] (0xc001bf13f0) Reply frame received for 5
I0404 17:56:06.304709 7 log.go:172] (0xc001bf13f0) Data frame received for 3
I0404 17:56:06.304768 7 log.go:172] (0xc001e59cc0) (3) Data frame handling
I0404 17:56:06.304801 7 log.go:172] (0xc001e59cc0) (3) Data frame sent
I0404 17:56:06.305843 7 log.go:172] (0xc001bf13f0) Data frame received for 3
I0404 17:56:06.305868 7 log.go:172] (0xc001e59cc0) (3) Data frame handling
I0404 17:56:06.306257 7 log.go:172] (0xc001bf13f0) Data frame received for 5
I0404 17:56:06.306288 7 log.go:172] (0xc0011a92c0) (5) Data frame handling
I0404 17:56:06.307674 7 log.go:172] (0xc001bf13f0) Data frame received for 1
I0404 17:56:06.307743 7 log.go:172] (0xc0011a9220) (1) Data frame handling
I0404 17:56:06.307773 7 log.go:172] (0xc0011a9220) (1) Data frame sent
I0404 17:56:06.307791 7 log.go:172] (0xc001bf13f0) (0xc0011a9220) Stream removed, broadcasting: 1
I0404 17:56:06.307809 7 log.go:172] (0xc001bf13f0) Go away received
I0404 17:56:06.308127 7 log.go:172] (0xc001bf13f0) (0xc0011a9220) Stream removed, broadcasting: 1
I0404 17:56:06.308159 7 log.go:172] (0xc001bf13f0) (0xc001e59cc0) Stream removed, broadcasting: 3
I0404 17:56:06.308184 7 log.go:172] (0xc001bf13f0) (0xc0011a92c0) Stream removed, broadcasting: 5
Apr 4 17:56:06.308: INFO: Waiting for responses: map[]
Apr 4 17:56:06.311: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.247:8080/dial?request=hostname&protocol=udp&host=10.244.1.246&port=8081&tries=1'] Namespace:pod-network-test-6699 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 17:56:06.311: INFO: >>> kubeConfig: /root/.kube/config
I0404 17:56:06.344670 7 log.go:172] (0xc0050f8420) (0xc0010fe820) Create stream
I0404 17:56:06.344701 7 log.go:172] (0xc0050f8420) (0xc0010fe820) Stream added, broadcasting: 1
I0404 17:56:06.346839 7 log.go:172] (0xc0050f8420) Reply frame received for 1
I0404 17:56:06.346890 7 log.go:172] (0xc0050f8420) (0xc0011a94a0) Create stream
I0404 17:56:06.346906 7 log.go:172] (0xc0050f8420) (0xc0011a94a0) Stream added, broadcasting: 3
I0404 17:56:06.347722 7 log.go:172] (0xc0050f8420) Reply frame received for 3
I0404 17:56:06.347783 7 log.go:172] (0xc0050f8420) (0xc001e59e00) Create stream
I0404 17:56:06.347806 7 log.go:172] (0xc0050f8420) (0xc001e59e00) Stream added, broadcasting: 5
I0404 17:56:06.348653 7 log.go:172] (0xc0050f8420) Reply frame received for 5
I0404 17:56:06.410521 7 log.go:172] (0xc0050f8420) Data frame received for 3
I0404 17:56:06.410569 7 log.go:172] (0xc0011a94a0) (3) Data frame handling
I0404 17:56:06.410611 7 log.go:172] (0xc0011a94a0) (3) Data frame sent
I0404 17:56:06.410775 7 log.go:172] (0xc0050f8420) Data frame received for 5
I0404 17:56:06.410801 7 log.go:172] (0xc001e59e00) (5) Data frame handling
I0404 17:56:06.410823 7 log.go:172] (0xc0050f8420) Data frame received for 3
I0404 17:56:06.410863 7 log.go:172] (0xc0011a94a0) (3) Data frame handling
I0404 17:56:06.412641 7 log.go:172] (0xc0050f8420) Data frame received for 1
I0404 17:56:06.412666 7 log.go:172] (0xc0010fe820) (1) Data frame handling
I0404 17:56:06.412677 7 log.go:172] (0xc0010fe820) (1) Data frame sent
I0404 17:56:06.412690 7 log.go:172] (0xc0050f8420) (0xc0010fe820) Stream removed, broadcasting: 1
I0404 17:56:06.412707 7 log.go:172] (0xc0050f8420) Go away received
I0404 17:56:06.412808 7 log.go:172] (0xc0050f8420) (0xc0010fe820) Stream removed, broadcasting: 1
I0404 17:56:06.412827 7 log.go:172] (0xc0050f8420) (0xc0011a94a0) Stream removed, broadcasting: 3
I0404 17:56:06.412840 7 log.go:172] (0xc0050f8420) (0xc001e59e00) Stream removed, broadcasting: 5
Apr 4 17:56:06.412: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:56:06.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6699" for this suite.
• [SLOW TEST:26.442 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":281,"completed":103,"skipped":1795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:56:06.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:56:06.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 4 17:56:08.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-930 create -f -'
Apr 4 17:56:11.479: INFO: stderr: ""
Apr 4 17:56:11.479: INFO: stdout: "e2e-test-crd-publish-openapi-5353-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 4 17:56:11.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-930 delete e2e-test-crd-publish-openapi-5353-crds test-cr'
Apr 4 17:56:11.601: INFO: stderr: ""
Apr 4 17:56:11.601: INFO: stdout: "e2e-test-crd-publish-openapi-5353-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 4 17:56:11.601: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-930 apply -f -'
Apr 4 17:56:11.867: INFO: stderr: ""
Apr 4 17:56:11.867: INFO: stdout: "e2e-test-crd-publish-openapi-5353-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 4 17:56:11.867: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-930 delete e2e-test-crd-publish-openapi-5353-crds test-cr'
Apr 4 17:56:12.425: INFO: stderr: ""
Apr 4 17:56:12.425: INFO: stdout: "e2e-test-crd-publish-openapi-5353-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 4 17:56:12.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5353-crds'
Apr 4 17:56:12.812: INFO: stderr: ""
Apr 4 17:56:12.812: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5353-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:56:15.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-930" for this suite.
• [SLOW TEST:9.274 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":281,"completed":104,"skipped":1826,"failed":0}
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:56:15.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting the auto-created API token
Apr 4 17:56:16.301: INFO: created pod pod-service-account-defaultsa
Apr 4 17:56:16.301: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Apr 4 17:56:16.305: INFO: created pod pod-service-account-mountsa
Apr 4 17:56:16.305: INFO: pod pod-service-account-mountsa service account token volume mount: true
Apr 4 17:56:16.378: INFO: created pod pod-service-account-nomountsa
Apr 4 17:56:16.378: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Apr 4 17:56:16.395: INFO: created pod pod-service-account-defaultsa-mountspec
Apr 4 17:56:16.395: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Apr 4 17:56:16.442: INFO: created pod pod-service-account-mountsa-mountspec
Apr 4 17:56:16.442: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Apr 4 17:56:16.472: INFO: created pod pod-service-account-nomountsa-mountspec
Apr 4 17:56:16.472: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Apr 4 17:56:16.515: INFO: created pod pod-service-account-defaultsa-nomountspec
Apr 4 17:56:16.515: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Apr 4 17:56:16.528: INFO: created pod pod-service-account-mountsa-nomountspec
Apr 4 17:56:16.528: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Apr 4 17:56:16.553: INFO: created pod pod-service-account-nomountsa-nomountspec
Apr 4 17:56:16.553: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:56:16.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-591" for this suite.
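The nine pods above enumerate combinations of ServiceAccount-level and pod-level automount settings. The opt-out case looks roughly like the sketch below (names are illustrative); the pod-level `automountServiceAccountToken` field, when set, takes precedence over the ServiceAccount's.

```yaml
# Sketch of opting out of API token automount (illustrative names).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa
automountServiceAccountToken: false   # SA-level default: no token volume
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-nomount
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level setting wins if both are set
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
```

The "service account token volume mount: true/false" log lines record whether the resulting pod actually received the projected token volume for each combination.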
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":281,"completed":105,"skipped":1826,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:56:16.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 17:56:16.822: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 4 17:56:16.832: INFO: Number of nodes with available pods: 0
Apr 4 17:56:16.832: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 4 17:56:16.909: INFO: Number of nodes with available pods: 0
Apr 4 17:56:16.909: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:17.912: INFO: Number of nodes with available pods: 0
Apr 4 17:56:17.912: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:18.946: INFO: Number of nodes with available pods: 0
Apr 4 17:56:18.946: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:19.913: INFO: Number of nodes with available pods: 0
Apr 4 17:56:19.913: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:20.962: INFO: Number of nodes with available pods: 0
Apr 4 17:56:20.962: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:22.078: INFO: Number of nodes with available pods: 0
Apr 4 17:56:22.078: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:23.486: INFO: Number of nodes with available pods: 0
Apr 4 17:56:23.486: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:23.926: INFO: Number of nodes with available pods: 0
Apr 4 17:56:23.926: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:24.987: INFO: Number of nodes with available pods: 0
Apr 4 17:56:24.987: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:26.114: INFO: Number of nodes with available pods: 0
Apr 4 17:56:26.114: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:26.926: INFO: Number of nodes with available pods: 1
Apr 4 17:56:26.926: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 4 17:56:27.250: INFO: Number of nodes with available pods: 1
Apr 4 17:56:27.250: INFO: Number of running nodes: 0, number of available pods: 1
Apr 4 17:56:28.254: INFO: Number of nodes with available pods: 0
Apr 4 17:56:28.254: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 4 17:56:28.264: INFO: Number of nodes with available pods: 0
Apr 4 17:56:28.264: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:29.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:29.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:30.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:30.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:31.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:31.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:32.267: INFO: Number of nodes with available pods: 0
Apr 4 17:56:32.267: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:33.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:33.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:34.267: INFO: Number of nodes with available pods: 0
Apr 4 17:56:34.267: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:35.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:35.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:36.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:36.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:37.269: INFO: Number of nodes with available pods: 0
Apr 4 17:56:37.269: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:38.267: INFO: Number of nodes with available pods: 0
Apr 4 17:56:38.267: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:39.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:39.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:40.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:40.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:41.267: INFO: Number of nodes with available pods: 0
Apr 4 17:56:41.267: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:42.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:42.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:43.267: INFO: Number of nodes with available pods: 0
Apr 4 17:56:43.267: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:44.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:44.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:45.268: INFO: Number of nodes with available pods: 0
Apr 4 17:56:45.268: INFO: Node latest-worker is running more than one daemon pod
Apr 4 17:56:46.268: INFO: Number of nodes with available pods: 1
Apr 4 17:56:46.268: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2599, will wait for the garbage collector to delete the pods
Apr 4 17:56:46.334: INFO: Deleting DaemonSet.extensions daemon-set took: 6.661685ms
Apr 4 17:56:46.634: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.173646ms
Apr 4 17:56:52.838: INFO: Number of nodes with available pods: 0
Apr 4 17:56:52.838: INFO: Number of running nodes: 0, number of available pods: 0
Apr 4 17:56:52.844: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2599/daemonsets","resourceVersion":"5397888"},"items":null}
Apr 4 17:56:52.847: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2599/pods","resourceVersion":"5397888"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 17:56:52.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2599" for this suite.
• [SLOW TEST:36.237 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":281,"completed":106,"skipped":1876,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 17:56:52.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod var-expansion-d5b1f33f-cd9a-4eae-93c2-a9ce7d35e82e
STEP: updating the pod
Apr 4 17:56:59.502: INFO: Successfully updated pod "var-expansion-d5b1f33f-cd9a-4eae-93c2-a9ce7d35e82e"
STEP: waiting for pod and container restart
STEP: Failing liveness probe
Apr 4 17:56:59.514: INFO: ExecWithOptions {Command:[/bin/sh -c rm /volume_mount/foo/test.log] Namespace:var-expansion-7943 PodName:var-expansion-d5b1f33f-cd9a-4eae-93c2-a9ce7d35e82e ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 17:56:59.514: INFO: >>> kubeConfig: /root/.kube/config
I0404 17:56:59.550821 7 log.go:172] (0xc00288e370) (0xc001b4ea00) Create stream
I0404 17:56:59.550849 7 log.go:172] (0xc00288e370) (0xc001b4ea00) Stream added, broadcasting: 1
I0404 17:56:59.553502 7 log.go:172] (0xc00288e370) Reply frame received for 1
I0404 17:56:59.553544 7 log.go:172] (0xc00288e370) (0xc000c26280) Create stream
I0404 17:56:59.553560 7 log.go:172] (0xc00288e370) (0xc000c26280) Stream added, broadcasting: 3
I0404 17:56:59.554639 7 log.go:172] (0xc00288e370) Reply frame received for 3
I0404 17:56:59.554684 7 log.go:172] (0xc00288e370) (0xc0016a7c20) Create stream
I0404 17:56:59.554701 7 log.go:172] (0xc00288e370) (0xc0016a7c20) Stream added, broadcasting: 5
I0404 17:56:59.555714 7 log.go:172] (0xc00288e370) Reply frame received for 5
I0404 17:56:59.707614 7 log.go:172] (0xc00288e370) Data frame received for 3
I0404 17:56:59.707638 7 log.go:172] (0xc000c26280) (3) Data frame handling
I0404 17:56:59.707690 7 log.go:172] (0xc00288e370) Data frame received for 5
I0404 17:56:59.707723 7 log.go:172] (0xc0016a7c20) (5) Data frame handling
I0404 17:56:59.708966 7 log.go:172] (0xc00288e370) Data frame received for 1
I0404 17:56:59.708988 7 log.go:172] (0xc001b4ea00) (1) Data frame handling
I0404 17:56:59.709004 7 log.go:172] (0xc001b4ea00) (1) Data frame sent
I0404 17:56:59.709026 7 log.go:172] (0xc00288e370) (0xc001b4ea00) Stream removed, broadcasting: 1
I0404 17:56:59.709048 7 log.go:172] (0xc00288e370) Go away received
I0404 17:56:59.709283 7 log.go:172] (0xc00288e370) (0xc001b4ea00) Stream removed, broadcasting: 1
I0404 17:56:59.709306 7 log.go:172] (0xc00288e370) (0xc000c26280) Stream removed, broadcasting: 3
I0404 17:56:59.709322 7 log.go:172] (0xc00288e370) (0xc0016a7c20) Stream removed, broadcasting: 5
Apr 4 17:56:59.709: INFO: Pod exec output: /
STEP: Waiting for container to restart
Apr 4 17:56:59.712: INFO: Container dapi-container, restarts: 0
Apr 4 17:57:09.716: INFO: Container dapi-container, restarts: 0
Apr 4 17:57:19.716: INFO: Container dapi-container, restarts: 0
Apr 4 17:57:29.716: INFO: Container dapi-container, restarts: 0
Apr 4 17:57:39.716: INFO: Container dapi-container, restarts: 1
Apr 4 17:57:39.716: INFO: Container has restart count: 1
STEP: Rewriting the file
Apr 4 17:57:39.719: INFO: ExecWithOptions {Command:[/bin/sh -c echo test-after > /volume_mount/foo/test.log] Namespace:var-expansion-7943 PodName:var-expansion-d5b1f33f-cd9a-4eae-93c2-a9ce7d35e82e ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 17:57:39.719: INFO: >>> kubeConfig: /root/.kube/config
I0404 17:57:39.756525 7 log.go:172] (0xc002992420) (0xc000c27400) Create stream
I0404 17:57:39.756568 7 log.go:172] (0xc002992420) (0xc000c27400) Stream added, broadcasting: 1
I0404 17:57:39.759119 7 log.go:172] (0xc002992420) Reply frame received for 1
I0404 17:57:39.759169 7 log.go:172] (0xc002992420) (0xc001b4ebe0) Create stream
I0404 17:57:39.759186 7 log.go:172] (0xc002992420) (0xc001b4ebe0) Stream added, broadcasting: 3
I0404 17:57:39.760335 7 log.go:172] (0xc002992420) Reply frame received for 3
I0404 17:57:39.760374 7 log.go:172] (0xc002992420) (0xc000c27540) Create stream
I0404 17:57:39.760390 7 log.go:172] (0xc002992420) (0xc000c27540) Stream added, broadcasting: 5
I0404 17:57:39.761822 7 log.go:172] (0xc002992420) Reply frame received for 5
I0404 17:57:39.853847 7 log.go:172] (0xc002992420) Data frame received for 5
I0404 17:57:39.854003 7 log.go:172] (0xc000c27540) (5) Data frame handling
I0404 17:57:39.854569 7 log.go:172] (0xc002992420) Data frame received for 3
I0404 17:57:39.854667 7 log.go:172] (0xc001b4ebe0) (3) Data frame handling
I0404 17:57:39.863871 7 log.go:172] (0xc002992420) Data frame received for 1
I0404 17:57:39.863898 7 log.go:172] (0xc000c27400) (1) Data frame handling
I0404 17:57:39.863921 7 log.go:172] (0xc000c27400) (1) Data frame sent
I0404 17:57:39.863934 7 log.go:172] (0xc002992420) (0xc000c27400) Stream removed, broadcasting: 1
I0404 17:57:39.863954 7 log.go:172] (0xc002992420) Go away received
I0404 17:57:39.864010 7 log.go:172] (0xc002992420) (0xc000c27400) Stream removed, broadcasting: 1
I0404 17:57:39.864036 7 log.go:172] (0xc002992420) (0xc001b4ebe0) Stream removed, broadcasting: 3
I0404 17:57:39.864055 7 log.go:172] (0xc002992420) (0xc000c27540) Stream removed, broadcasting: 5
Apr 4 17:57:39.864: INFO: Pod exec output:
STEP: Waiting for container to stop restarting
Apr 4 17:58:09.871: INFO: Container has restart count: 2
Apr 4 17:59:11.870: INFO: Container restart has stabilized
STEP: test for subpath mounted with old value
Apr 4 17:59:11.873: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /volume_mount/foo/test.log] Namespace:var-expansion-7943 PodName:var-expansion-d5b1f33f-cd9a-4eae-93c2-a9ce7d35e82e ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 17:59:11.873: INFO: >>> kubeConfig: /root/.kube/config
I0404 17:59:11.902373 7 log.go:172] (0xc0023a22c0) (0xc00108f400) Create stream
I0404 17:59:11.902401 7 log.go:172] (0xc0023a22c0) (0xc00108f400) Stream added, broadcasting: 1
I0404 17:59:11.903771 7 log.go:172] (0xc0023a22c0) Reply frame received for 1
I0404 17:59:11.903806 7 log.go:172] (0xc0023a22c0) (0xc00108f900) Create stream
I0404 17:59:11.903813 7 log.go:172] (0xc0023a22c0) (0xc00108f900) Stream added, broadcasting: 3
I0404 17:59:11.904597 7 log.go:172] (0xc0023a22c0) Reply frame received for 3
I0404 17:59:11.904627 7 log.go:172] (0xc0023a22c0) (0xc001de41e0) Create stream
I0404 17:59:11.904639 7 log.go:172] (0xc0023a22c0) (0xc001de41e0) Stream added, broadcasting: 5
I0404 17:59:11.905428 7 log.go:172] (0xc0023a22c0) Reply frame received for 5
I0404 17:59:11.983000 7 log.go:172] (0xc0023a22c0) Data frame received for 5
I0404 17:59:11.983033 7 log.go:172] (0xc001de41e0) (5) Data frame handling
I0404 17:59:11.983053 7 log.go:172] (0xc0023a22c0) Data frame received for 3
I0404 17:59:11.983065 7 log.go:172] (0xc00108f900) (3) Data frame handling
I0404 17:59:11.984041 7 log.go:172] (0xc0023a22c0) Data frame received for 1
I0404 17:59:11.984071 7 log.go:172] (0xc00108f400) (1) Data frame handling
I0404 17:59:11.984079 7 log.go:172] (0xc00108f400) (1) Data frame sent
I0404 17:59:11.984097 7 log.go:172] (0xc0023a22c0) (0xc00108f400) Stream removed, broadcasting: 1
I0404 17:59:11.984109 7 log.go:172] (0xc0023a22c0) Go away received
I0404 17:59:11.984227 7 log.go:172] (0xc0023a22c0) (0xc00108f400) Stream removed, broadcasting: 1
I0404 17:59:11.984244 7 log.go:172] (0xc0023a22c0) (0xc00108f900) Stream removed, broadcasting: 3
I0404 17:59:11.984256 7 log.go:172] (0xc0023a22c0) (0xc001de41e0) Stream removed, broadcasting: 5
Apr 4 17:59:11.986: INFO: ExecWithOptions {Command:[/bin/sh -c test !
-f /volume_mount/newsubpath/test.log] Namespace:var-expansion-7943 PodName:var-expansion-d5b1f33f-cd9a-4eae-93c2-a9ce7d35e82e ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 17:59:11.986: INFO: >>> kubeConfig: /root/.kube/config I0404 17:59:12.136674 7 log.go:172] (0xc0026704d0) (0xc00225bea0) Create stream I0404 17:59:12.136697 7 log.go:172] (0xc0026704d0) (0xc00225bea0) Stream added, broadcasting: 1 I0404 17:59:12.138327 7 log.go:172] (0xc0026704d0) Reply frame received for 1 I0404 17:59:12.138351 7 log.go:172] (0xc0026704d0) (0xc00108fb80) Create stream I0404 17:59:12.138359 7 log.go:172] (0xc0026704d0) (0xc00108fb80) Stream added, broadcasting: 3 I0404 17:59:12.138966 7 log.go:172] (0xc0026704d0) Reply frame received for 3 I0404 17:59:12.138993 7 log.go:172] (0xc0026704d0) (0xc001de4280) Create stream I0404 17:59:12.139004 7 log.go:172] (0xc0026704d0) (0xc001de4280) Stream added, broadcasting: 5 I0404 17:59:12.139520 7 log.go:172] (0xc0026704d0) Reply frame received for 5 I0404 17:59:12.222792 7 log.go:172] (0xc0026704d0) Data frame received for 5 I0404 17:59:12.222815 7 log.go:172] (0xc001de4280) (5) Data frame handling I0404 17:59:12.222829 7 log.go:172] (0xc0026704d0) Data frame received for 3 I0404 17:59:12.222847 7 log.go:172] (0xc00108fb80) (3) Data frame handling I0404 17:59:12.224079 7 log.go:172] (0xc0026704d0) Data frame received for 1 I0404 17:59:12.224101 7 log.go:172] (0xc00225bea0) (1) Data frame handling I0404 17:59:12.224116 7 log.go:172] (0xc00225bea0) (1) Data frame sent I0404 17:59:12.224138 7 log.go:172] (0xc0026704d0) (0xc00225bea0) Stream removed, broadcasting: 1 I0404 17:59:12.224158 7 log.go:172] (0xc0026704d0) Go away received I0404 17:59:12.224350 7 log.go:172] (0xc0026704d0) (0xc00225bea0) Stream removed, broadcasting: 1 I0404 17:59:12.224386 7 log.go:172] (0xc0026704d0) (0xc00108fb80) Stream removed, broadcasting: 3 I0404 17:59:12.224415 7 log.go:172] (0xc0026704d0) 
(0xc001de4280) Stream removed, broadcasting: 5 Apr 4 17:59:12.224: INFO: Deleting pod "var-expansion-d5b1f33f-cd9a-4eae-93c2-a9ce7d35e82e" in namespace "var-expansion-7943" Apr 4 17:59:12.229: INFO: Wait up to 5m0s for pod "var-expansion-d5b1f33f-cd9a-4eae-93c2-a9ce7d35e82e" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:59:46.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7943" for this suite. • [SLOW TEST:173.434 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][Slow] [Conformance]","total":281,"completed":107,"skipped":1968,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:59:46.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should 
provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 17:59:46.429: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88" in namespace "downward-api-919" to be "Succeeded or Failed" Apr 4 17:59:46.444: INFO: Pod "downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88": Phase="Pending", Reason="", readiness=false. Elapsed: 14.798982ms Apr 4 17:59:48.448: INFO: Pod "downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019218764s Apr 4 17:59:50.452: INFO: Pod "downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022783099s Apr 4 17:59:52.455: INFO: Pod "downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026155342s STEP: Saw pod success Apr 4 17:59:52.455: INFO: Pod "downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88" satisfied condition "Succeeded or Failed" Apr 4 17:59:52.474: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88 container client-container: STEP: delete the pod Apr 4 17:59:52.837: INFO: Waiting for pod downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88 to disappear Apr 4 17:59:52.866: INFO: Pod downwardapi-volume-7261b466-864f-4920-9697-6e0c17ed7c88 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:59:52.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-919" for this suite. 
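The Downward API volume test above waits for its pod by re-checking the phase every ~2 seconds until it reaches "Succeeded or Failed", bounded by a 5m0s deadline. That wait-with-deadline pattern can be sketched generically; this is a minimal Python illustration of the polling loop, not the e2e framework's actual Go implementation, and all names here are illustrative:

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() every `interval` seconds until it returns a phase
    in `want`; raise TimeoutError once `timeout` seconds have elapsed.
    `clock` and `sleep` are injectable so the loop is testable."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in want:
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase!r} after {timeout}s")
        sleep(interval)
```

In the log, `get_phase` corresponds to fetching the pod and reading `status.phase` ("Pending" three times, then "Succeeded" after ~6s).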
• [SLOW TEST:6.547 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":281,"completed":108,"skipped":1970,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:59:52.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:75 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 17:59:53.031: INFO: Creating deployment "test-recreate-deployment" Apr 4 17:59:53.079: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 4 17:59:53.173: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 4 17:59:55.293: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 4 17:59:55.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619993, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619993, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619993, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721619993, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-846c7dd955\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 17:59:57.458: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 4 17:59:57.464: INFO: Updating deployment test-recreate-deployment Apr 4 17:59:57.464: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 Apr 4 17:59:58.354: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9190 /apis/apps/v1/namespaces/deployment-9190/deployments/test-recreate-deployment a0bf4463-e5dc-4011-b06d-7cdd780bc405 5398539 2 2020-04-04 17:59:53 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005548538 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-04 17:59:57 +0000 UTC,LastTransitionTime:2020-04-04 17:59:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-04 17:59:58 +0000 UTC,LastTransitionTime:2020-04-04 17:59:53 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 4 17:59:58.794: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-9190 /apis/apps/v1/namespaces/deployment-9190/replicasets/test-recreate-deployment-5f94c574ff 074e8940-82a0-439e-a875-7155689fc6ee 5398538 1 2020-04-04 17:59:57 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment a0bf4463-e5dc-4011-b06d-7cdd780bc405 0xc005548947 0xc005548948}] [] 
[]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0055489a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 17:59:58.794: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 4 17:59:58.794: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-846c7dd955 deployment-9190 /apis/apps/v1/namespaces/deployment-9190/replicasets/test-recreate-deployment-846c7dd955 8daf7ed1-b50b-4463-858b-e23dfa672121 5398528 2 2020-04-04 17:59:53 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment a0bf4463-e5dc-4011-b06d-7cdd780bc405 0xc005548a17 0xc005548a18}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 846c7dd955,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:846c7dd955] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005548a88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 17:59:58.844: INFO: Pod "test-recreate-deployment-5f94c574ff-6xs8w" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-6xs8w test-recreate-deployment-5f94c574ff- deployment-9190 /api/v1/namespaces/deployment-9190/pods/test-recreate-deployment-5f94c574ff-6xs8w 02034c16-6f85-4cda-935a-4aca21eebe7e 5398541 0 2020-04-04 17:59:57 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 074e8940-82a0-439e-a875-7155689fc6ee 0xc0045a6337 0xc0045a6338}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vcwc4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vcwc4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vcwc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 17:59:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 17:59:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 17:59:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 17:59:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 17:59:58 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 17:59:58.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9190" for this suite. • [SLOW TEST:6.075 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":281,"completed":109,"skipped":2001,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 17:59:58.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7607 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7607 I0404 17:59:59.742146 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7607, replica count: 2 I0404 18:00:02.792622 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:00:05.792865 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 4 18:00:05.792: INFO: Creating new exec pod Apr 4 18:00:10.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7607 execpod5p99s -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 4 18:00:11.035: INFO: stderr: "I0404 18:00:10.940909 2206 log.go:172] (0xc0006ee630) (0xc0006ea1e0) Create stream\nI0404 18:00:10.940992 2206 log.go:172] (0xc0006ee630) (0xc0006ea1e0) Stream added, broadcasting: 1\nI0404 18:00:10.944378 2206 log.go:172] (0xc0006ee630) Reply frame received for 1\nI0404 18:00:10.944413 2206 log.go:172] (0xc0006ee630) (0xc0006e0fa0) Create stream\nI0404 18:00:10.944425 2206 log.go:172] (0xc0006ee630) (0xc0006e0fa0) Stream added, broadcasting: 3\nI0404 18:00:10.945739 2206 log.go:172] (0xc0006ee630) Reply frame received for 3\nI0404 18:00:10.945791 2206 log.go:172] (0xc0006ee630) (0xc0006ea280) Create stream\nI0404 18:00:10.945806 2206 log.go:172] 
(0xc0006ee630) (0xc0006ea280) Stream added, broadcasting: 5\nI0404 18:00:10.946890 2206 log.go:172] (0xc0006ee630) Reply frame received for 5\nI0404 18:00:11.029019 2206 log.go:172] (0xc0006ee630) Data frame received for 5\nI0404 18:00:11.029053 2206 log.go:172] (0xc0006ea280) (5) Data frame handling\nI0404 18:00:11.029062 2206 log.go:172] (0xc0006ea280) (5) Data frame sent\nI0404 18:00:11.029068 2206 log.go:172] (0xc0006ee630) Data frame received for 5\nI0404 18:00:11.029073 2206 log.go:172] (0xc0006ea280) (5) Data frame handling\nI0404 18:00:11.029083 2206 log.go:172] (0xc0006ee630) Data frame received for 3\nI0404 18:00:11.029088 2206 log.go:172] (0xc0006e0fa0) (3) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0404 18:00:11.030575 2206 log.go:172] (0xc0006ee630) Data frame received for 1\nI0404 18:00:11.030596 2206 log.go:172] (0xc0006ea1e0) (1) Data frame handling\nI0404 18:00:11.030617 2206 log.go:172] (0xc0006ea1e0) (1) Data frame sent\nI0404 18:00:11.030628 2206 log.go:172] (0xc0006ee630) (0xc0006ea1e0) Stream removed, broadcasting: 1\nI0404 18:00:11.030667 2206 log.go:172] (0xc0006ee630) Go away received\nI0404 18:00:11.030879 2206 log.go:172] (0xc0006ee630) (0xc0006ea1e0) Stream removed, broadcasting: 1\nI0404 18:00:11.030892 2206 log.go:172] (0xc0006ee630) (0xc0006e0fa0) Stream removed, broadcasting: 3\nI0404 18:00:11.030897 2206 log.go:172] (0xc0006ee630) (0xc0006ea280) Stream removed, broadcasting: 5\n" Apr 4 18:00:11.035: INFO: stdout: "" Apr 4 18:00:11.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7607 execpod5p99s -- /bin/sh -x -c nc -zv -t -w 2 10.96.180.152 80' Apr 4 18:00:11.257: INFO: stderr: "I0404 18:00:11.169313 2227 log.go:172] (0xc0005369a0) (0xc0007400a0) Create stream\nI0404 18:00:11.169387 2227 log.go:172] (0xc0005369a0) (0xc0007400a0) Stream added, broadcasting: 
1\nI0404 18:00:11.172270 2227 log.go:172] (0xc0005369a0) Reply frame received for 1\nI0404 18:00:11.172348 2227 log.go:172] (0xc0005369a0) (0xc0006a9040) Create stream\nI0404 18:00:11.172378 2227 log.go:172] (0xc0005369a0) (0xc0006a9040) Stream added, broadcasting: 3\nI0404 18:00:11.173484 2227 log.go:172] (0xc0005369a0) Reply frame received for 3\nI0404 18:00:11.173676 2227 log.go:172] (0xc0005369a0) (0xc0007401e0) Create stream\nI0404 18:00:11.173738 2227 log.go:172] (0xc0005369a0) (0xc0007401e0) Stream added, broadcasting: 5\nI0404 18:00:11.175422 2227 log.go:172] (0xc0005369a0) Reply frame received for 5\nI0404 18:00:11.250597 2227 log.go:172] (0xc0005369a0) Data frame received for 3\nI0404 18:00:11.250654 2227 log.go:172] (0xc0006a9040) (3) Data frame handling\nI0404 18:00:11.250676 2227 log.go:172] (0xc0005369a0) Data frame received for 5\nI0404 18:00:11.250684 2227 log.go:172] (0xc0007401e0) (5) Data frame handling\nI0404 18:00:11.250693 2227 log.go:172] (0xc0007401e0) (5) Data frame sent\nI0404 18:00:11.250700 2227 log.go:172] (0xc0005369a0) Data frame received for 5\nI0404 18:00:11.250725 2227 log.go:172] (0xc0007401e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.180.152 80\nConnection to 10.96.180.152 80 port [tcp/http] succeeded!\nI0404 18:00:11.252018 2227 log.go:172] (0xc0005369a0) Data frame received for 1\nI0404 18:00:11.252044 2227 log.go:172] (0xc0007400a0) (1) Data frame handling\nI0404 18:00:11.252063 2227 log.go:172] (0xc0007400a0) (1) Data frame sent\nI0404 18:00:11.252086 2227 log.go:172] (0xc0005369a0) (0xc0007400a0) Stream removed, broadcasting: 1\nI0404 18:00:11.252106 2227 log.go:172] (0xc0005369a0) Go away received\nI0404 18:00:11.252477 2227 log.go:172] (0xc0005369a0) (0xc0007400a0) Stream removed, broadcasting: 1\nI0404 18:00:11.252503 2227 log.go:172] (0xc0005369a0) (0xc0006a9040) Stream removed, broadcasting: 3\nI0404 18:00:11.252513 2227 log.go:172] (0xc0005369a0) (0xc0007401e0) Stream removed, broadcasting: 5\n" Apr 4 
18:00:11.257: INFO: stdout: "" Apr 4 18:00:11.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7607 execpod5p99s -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 30663' Apr 4 18:00:11.457: INFO: stderr: "I0404 18:00:11.382955 2249 log.go:172] (0xc0003c7a20) (0xc0008250e0) Create stream\nI0404 18:00:11.383027 2249 log.go:172] (0xc0003c7a20) (0xc0008250e0) Stream added, broadcasting: 1\nI0404 18:00:11.385819 2249 log.go:172] (0xc0003c7a20) Reply frame received for 1\nI0404 18:00:11.385851 2249 log.go:172] (0xc0003c7a20) (0xc00096e000) Create stream\nI0404 18:00:11.385858 2249 log.go:172] (0xc0003c7a20) (0xc00096e000) Stream added, broadcasting: 3\nI0404 18:00:11.386864 2249 log.go:172] (0xc0003c7a20) Reply frame received for 3\nI0404 18:00:11.386924 2249 log.go:172] (0xc0003c7a20) (0xc000992000) Create stream\nI0404 18:00:11.386940 2249 log.go:172] (0xc0003c7a20) (0xc000992000) Stream added, broadcasting: 5\nI0404 18:00:11.387695 2249 log.go:172] (0xc0003c7a20) Reply frame received for 5\nI0404 18:00:11.446305 2249 log.go:172] (0xc0003c7a20) Data frame received for 5\nI0404 18:00:11.446384 2249 log.go:172] (0xc000992000) (5) Data frame handling\nI0404 18:00:11.446412 2249 log.go:172] (0xc000992000) (5) Data frame sent\nI0404 18:00:11.446443 2249 log.go:172] (0xc0003c7a20) Data frame received for 5\nI0404 18:00:11.446456 2249 log.go:172] (0xc000992000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 30663\nConnection to 172.17.0.13 30663 port [tcp/30663] succeeded!\nI0404 18:00:11.446471 2249 log.go:172] (0xc0003c7a20) Data frame received for 3\nI0404 18:00:11.446483 2249 log.go:172] (0xc00096e000) (3) Data frame handling\nI0404 18:00:11.451482 2249 log.go:172] (0xc0003c7a20) Data frame received for 1\nI0404 18:00:11.451516 2249 log.go:172] (0xc0008250e0) (1) Data frame handling\nI0404 18:00:11.451536 2249 log.go:172] (0xc0008250e0) (1) Data frame sent\nI0404 18:00:11.451548 2249 
log.go:172] (0xc0003c7a20) (0xc0008250e0) Stream removed, broadcasting: 1\nI0404 18:00:11.451564 2249 log.go:172] (0xc0003c7a20) Go away received\nI0404 18:00:11.452173 2249 log.go:172] (0xc0003c7a20) (0xc0008250e0) Stream removed, broadcasting: 1\nI0404 18:00:11.452206 2249 log.go:172] (0xc0003c7a20) (0xc00096e000) Stream removed, broadcasting: 3\nI0404 18:00:11.452224 2249 log.go:172] (0xc0003c7a20) (0xc000992000) Stream removed, broadcasting: 5\n" Apr 4 18:00:11.457: INFO: stdout: "" Apr 4 18:00:11.457: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7607 execpod5p99s -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 30663' Apr 4 18:00:11.647: INFO: stderr: "I0404 18:00:11.574638 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a000) Create stream\nI0404 18:00:11.574701 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a000) Stream added, broadcasting: 1\nI0404 18:00:11.576518 2271 log.go:172] (0xc000bfe0b0) Reply frame received for 1\nI0404 18:00:11.576540 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a0a0) Create stream\nI0404 18:00:11.576547 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a0a0) Stream added, broadcasting: 3\nI0404 18:00:11.577987 2271 log.go:172] (0xc000bfe0b0) Reply frame received for 3\nI0404 18:00:11.578022 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a1e0) Create stream\nI0404 18:00:11.578034 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a1e0) Stream added, broadcasting: 5\nI0404 18:00:11.578964 2271 log.go:172] (0xc000bfe0b0) Reply frame received for 5\nI0404 18:00:11.640051 2271 log.go:172] (0xc000bfe0b0) Data frame received for 5\nI0404 18:00:11.640087 2271 log.go:172] (0xc000a4a1e0) (5) Data frame handling\nI0404 18:00:11.640110 2271 log.go:172] (0xc000a4a1e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.12 30663\nConnection to 172.17.0.12 30663 port [tcp/30663] succeeded!\nI0404 18:00:11.641512 2271 log.go:172] (0xc000bfe0b0) Data frame received for 3\nI0404 18:00:11.641533 2271 log.go:172] 
(0xc000a4a0a0) (3) Data frame handling\nI0404 18:00:11.641574 2271 log.go:172] (0xc000bfe0b0) Data frame received for 5\nI0404 18:00:11.641615 2271 log.go:172] (0xc000a4a1e0) (5) Data frame handling\nI0404 18:00:11.642151 2271 log.go:172] (0xc000bfe0b0) Data frame received for 1\nI0404 18:00:11.642171 2271 log.go:172] (0xc000a4a000) (1) Data frame handling\nI0404 18:00:11.642186 2271 log.go:172] (0xc000a4a000) (1) Data frame sent\nI0404 18:00:11.642264 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a000) Stream removed, broadcasting: 1\nI0404 18:00:11.642304 2271 log.go:172] (0xc000bfe0b0) Go away received\nI0404 18:00:11.642611 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a000) Stream removed, broadcasting: 1\nI0404 18:00:11.642627 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a0a0) Stream removed, broadcasting: 3\nI0404 18:00:11.642634 2271 log.go:172] (0xc000bfe0b0) (0xc000a4a1e0) Stream removed, broadcasting: 5\n" Apr 4 18:00:11.648: INFO: stdout: "" Apr 4 18:00:11.648: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:00:11.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7607" for this suite. 
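The connectivity checks above probe each node IP (172.17.0.13 and 172.17.0.12) on the allocated NodePort 30663 with `nc -zv` from an exec pod. For readers following along, a manifest for the kind of NodePort service the test converts to might look like the sketch below; the service name, selector, and port 80 are assumptions for illustration, while the namespace and nodePort come from the log.

```yaml
# Hypothetical NodePort service resembling the one produced after the
# test changes the service type from ExternalName to NodePort.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service      # illustrative name
  namespace: services-7607
spec:
  type: NodePort
  selector:
    app: externalname-service     # assumed selector
  ports:
  - port: 80                      # assumed service port
    targetPort: 80
    nodePort: 30663               # the port probed by nc in the log
```

Once the type is NodePort, the check shown in the log (`nc -zv -t -w 2 <node-ip> 30663`) should succeed from any pod that can reach the nodes, which is exactly what the exec pod verifies.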
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.739 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":281,"completed":110,"skipped":2007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:00:11.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:00:16.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1896" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":281,"completed":111,"skipped":2038,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:00:16.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Apr 4 18:00:16.344: INFO: PodSpec: initContainers in spec.initContainers Apr 4 18:01:03.812: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-664a28f6-4a22-43b5-af3c-be339df3b2b1", GenerateName:"", Namespace:"init-container-4139", SelfLink:"/api/v1/namespaces/init-container-4139/pods/pod-init-664a28f6-4a22-43b5-af3c-be339df3b2b1", UID:"6ab5b0d1-27dd-47bb-8881-48a6fca88999", ResourceVersion:"5398989", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721620016, loc:(*time.Location)(0x7bcb460)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"344320347"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9wt52", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002c7e300), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9wt52", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9wt52", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9wt52", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004536648), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"latest-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00233c620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0045366d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0045366f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0045366f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0045366fc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620016, loc:(*time.Location)(0x7bcb460)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620016, loc:(*time.Location)(0x7bcb460)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620016, loc:(*time.Location)(0x7bcb460)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620016, loc:(*time.Location)(0x7bcb460)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.12", PodIP:"10.244.1.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.4"}}, StartTime:(*v1.Time)(0xc0027203c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00233c700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00233c770)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://970e608a7cf8c6ef481b130f42ef6a0bc686b88e0d6222a032dd787863354562", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002720420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0027203e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00453677f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:01:03.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4139" for this suite. 
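The Go struct dump above is hard to read; rewritten as a manifest, the pod under test looks roughly like the sketch below. Fields are taken from the dump where shown (images, commands, restart policy, CPU limits); anything else is omitted rather than guessed.

```yaml
# Reconstruction of the pod spec dumped above, for readability only.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-664a28f6-4a22-43b5-af3c-be339df3b2b1
  namespace: init-container-4139
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]       # always exits non-zero
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]        # never runs while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
    resources:
      limits:
        cpu: 100m
      requests:
        cpu: 100m
```

Because init containers run in order and `init1` keeps failing, the kubelet restarts it with backoff (`RestartCount:3` in the status above), `init2` never starts, and the app container `run1` stays Waiting — which is the behavior this conformance test asserts.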
• [SLOW TEST:47.590 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":281,"completed":112,"skipped":2046,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:01:03.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 18:01:03.961: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1194d5c9-7645-4ad8-b969-a096082e6a3a" in namespace "downward-api-2158" to be "Succeeded or Failed" Apr 4 18:01:03.969: INFO: Pod "downwardapi-volume-1194d5c9-7645-4ad8-b969-a096082e6a3a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.487322ms Apr 4 18:01:05.973: INFO: Pod "downwardapi-volume-1194d5c9-7645-4ad8-b969-a096082e6a3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011787923s Apr 4 18:01:07.976: INFO: Pod "downwardapi-volume-1194d5c9-7645-4ad8-b969-a096082e6a3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015418275s STEP: Saw pod success Apr 4 18:01:07.977: INFO: Pod "downwardapi-volume-1194d5c9-7645-4ad8-b969-a096082e6a3a" satisfied condition "Succeeded or Failed" Apr 4 18:01:07.980: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-1194d5c9-7645-4ad8-b969-a096082e6a3a container client-container: STEP: delete the pod Apr 4 18:01:08.010: INFO: Waiting for pod downwardapi-volume-1194d5c9-7645-4ad8-b969-a096082e6a3a to disappear Apr 4 18:01:08.014: INFO: Pod downwardapi-volume-1194d5c9-7645-4ad8-b969-a096082e6a3a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:01:08.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2158" for this suite. 
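The test above mounts a downward API volume and checks the permission bits on an item file. A minimal sketch of such a pod follows; the pod name, container image, command, mount path, and the `0400` mode are illustrative assumptions, not values taken from this log.

```yaml
# Hypothetical pod exercising a downward API volume item with an
# explicit file mode, in the spirit of the test above.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the test generates a UID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29     # assumed image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                   # assumed mode; the test checks this bit pattern
```

The kubelet writes `metadata.name` into `/etc/podinfo/podname` with the requested mode, and the test reads the container logs to confirm it.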
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":113,"skipped":2048,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:01:08.020: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 18:01:08.124: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9e9df30-c8e5-407a-be54-94d6972379c0" in namespace "projected-6568" to be "Succeeded or Failed" Apr 4 18:01:08.141: INFO: Pod "downwardapi-volume-c9e9df30-c8e5-407a-be54-94d6972379c0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.30684ms Apr 4 18:01:10.145: INFO: Pod "downwardapi-volume-c9e9df30-c8e5-407a-be54-94d6972379c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021415247s Apr 4 18:01:12.150: INFO: Pod "downwardapi-volume-c9e9df30-c8e5-407a-be54-94d6972379c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026381314s STEP: Saw pod success Apr 4 18:01:12.150: INFO: Pod "downwardapi-volume-c9e9df30-c8e5-407a-be54-94d6972379c0" satisfied condition "Succeeded or Failed" Apr 4 18:01:12.153: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-c9e9df30-c8e5-407a-be54-94d6972379c0 container client-container: STEP: delete the pod Apr 4 18:01:12.171: INFO: Waiting for pod downwardapi-volume-c9e9df30-c8e5-407a-be54-94d6972379c0 to disappear Apr 4 18:01:12.181: INFO: Pod downwardapi-volume-c9e9df30-c8e5-407a-be54-94d6972379c0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:01:12.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6568" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":281,"completed":114,"skipped":2058,"failed":0} ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:01:12.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin 
Apr 4 18:01:12.303: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2d4c940-1b7d-4756-806b-1c09fc64d10f" in namespace "downward-api-9611" to be "Succeeded or Failed" Apr 4 18:01:12.315: INFO: Pod "downwardapi-volume-f2d4c940-1b7d-4756-806b-1c09fc64d10f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.725726ms Apr 4 18:01:14.318: INFO: Pod "downwardapi-volume-f2d4c940-1b7d-4756-806b-1c09fc64d10f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014829901s Apr 4 18:01:16.322: INFO: Pod "downwardapi-volume-f2d4c940-1b7d-4756-806b-1c09fc64d10f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019275853s STEP: Saw pod success Apr 4 18:01:16.322: INFO: Pod "downwardapi-volume-f2d4c940-1b7d-4756-806b-1c09fc64d10f" satisfied condition "Succeeded or Failed" Apr 4 18:01:16.325: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-f2d4c940-1b7d-4756-806b-1c09fc64d10f container client-container: STEP: delete the pod Apr 4 18:01:16.343: INFO: Waiting for pod downwardapi-volume-f2d4c940-1b7d-4756-806b-1c09fc64d10f to disappear Apr 4 18:01:16.354: INFO: Pod downwardapi-volume-f2d4c940-1b7d-4756-806b-1c09fc64d10f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:01:16.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9611" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":281,"completed":115,"skipped":2058,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:01:16.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-551.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-551.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-551.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-551.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-551.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-551.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 18:01:24.784: INFO: DNS probes using dns-551/dns-test-14cabaf3-442e-4e5e-9799-32d690edb620 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:01:24.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-551" for this suite. 
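The probe loops above repeatedly run `getent hosts` and `dig` against `dns-querier-2.dns-test-service-2.dns-551.svc.cluster.local` and the pod A record. That name exists because the test pairs a headless service with a pod whose `hostname` and `subdomain` match it; a sketch of those two objects follows. The selector/label pair, port, and container are illustrative assumptions; the service, pod names, and namespace are taken from the log.

```yaml
# Headless service plus hostname/subdomain pod, which is what creates the
# dns-querier-2.dns-test-service-2.dns-551.svc.cluster.local record
# probed above.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
  namespace: dns-551
spec:
  clusterIP: None             # headless: per-pod DNS records are published
  selector:
    dns-test: "true"          # assumed selector
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  namespace: dns-551
  labels:
    dns-test: "true"          # assumed label matching the selector
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2
  containers:
  - name: querier
    image: docker.io/library/busybox:1.29   # illustrative
    command: ["sleep", "3600"]
```

With both objects in place, cluster DNS resolves `<hostname>.<subdomain>.<namespace>.svc.cluster.local` to the pod IP, so both the wheezy and jessie probers write their OK markers and the test reports success.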
• [SLOW TEST:8.544 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":281,"completed":116,"skipped":2063,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:01:24.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-bd84156f-1ee6-466c-82c7-83d1328c118d STEP: Creating a pod to test consume secrets Apr 4 18:01:25.327: INFO: Waiting up to 5m0s for pod "pod-secrets-cadf2cfb-b15b-4b22-ad6e-af4644e707da" in namespace "secrets-1205" to be "Succeeded or Failed" Apr 4 18:01:25.360: INFO: Pod "pod-secrets-cadf2cfb-b15b-4b22-ad6e-af4644e707da": Phase="Pending", Reason="", readiness=false. Elapsed: 32.413876ms Apr 4 18:01:27.364: INFO: Pod "pod-secrets-cadf2cfb-b15b-4b22-ad6e-af4644e707da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036289119s Apr 4 18:01:29.368: INFO: Pod "pod-secrets-cadf2cfb-b15b-4b22-ad6e-af4644e707da": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.04073866s STEP: Saw pod success Apr 4 18:01:29.368: INFO: Pod "pod-secrets-cadf2cfb-b15b-4b22-ad6e-af4644e707da" satisfied condition "Succeeded or Failed" Apr 4 18:01:29.371: INFO: Trying to get logs from node latest-worker pod pod-secrets-cadf2cfb-b15b-4b22-ad6e-af4644e707da container secret-env-test: STEP: delete the pod Apr 4 18:01:29.393: INFO: Waiting for pod pod-secrets-cadf2cfb-b15b-4b22-ad6e-af4644e707da to disappear Apr 4 18:01:29.398: INFO: Pod pod-secrets-cadf2cfb-b15b-4b22-ad6e-af4644e707da no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:01:29.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1205" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":281,"completed":117,"skipped":2069,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:01:29.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:01:29.453: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties 
Apr 4 18:01:32.341: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-628 create -f -'
Apr 4 18:01:35.524: INFO: stderr: ""
Apr 4 18:01:35.524: INFO: stdout: "e2e-test-crd-publish-openapi-7662-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 4 18:01:35.524: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-628 delete e2e-test-crd-publish-openapi-7662-crds test-foo'
Apr 4 18:01:35.629: INFO: stderr: ""
Apr 4 18:01:35.629: INFO: stdout: "e2e-test-crd-publish-openapi-7662-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 4 18:01:35.629: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-628 apply -f -'
Apr 4 18:01:35.889: INFO: stderr: ""
Apr 4 18:01:35.889: INFO: stdout: "e2e-test-crd-publish-openapi-7662-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 4 18:01:35.889: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-628 delete e2e-test-crd-publish-openapi-7662-crds test-foo'
Apr 4 18:01:36.052: INFO: stderr: ""
Apr 4 18:01:36.052: INFO: stdout: "e2e-test-crd-publish-openapi-7662-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 4 18:01:36.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-628 create -f -'
Apr 4 18:01:36.278: INFO: rc: 1
Apr 4 18:01:36.278: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-628 apply -f -'
Apr 4 18:01:36.506: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 4 18:01:36.506: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-628 create -f -'
Apr 4 18:01:36.715: INFO: rc: 1
Apr 4 18:01:36.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-628 apply -f -'
Apr 4 18:01:37.004: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 4 18:01:37.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7662-crds'
Apr 4 18:01:37.215: INFO: stderr: ""
Apr 4 18:01:37.215: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Apr 4 18:01:37.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7662-crds.metadata'
Apr 4 18:01:37.906: INFO: stderr: ""
Apr 4 18:01:37.906: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Apr 4 18:01:37.907: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7662-crds.spec'
Apr 4 18:01:38.173: INFO: stderr: ""
Apr 4 18:01:38.173: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Apr 4 18:01:38.173: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7662-crds.spec.bars'
Apr 4 18:01:38.466: INFO: stderr: ""
Apr 4 18:01:38.466: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Apr 4 18:01:38.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7662-crds.spec.bars2'
Apr 4 18:01:38.697: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:01:41.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-628" for this suite.
• [SLOW TEST:12.195 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD with validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":281,"completed":118,"skipped":2081,"failed":0}
S
------------------------------
[sig-api-machinery] Aggregator
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:01:41.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Apr 4 18:01:41.648: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the sample API server.
Apr 4 18:01:42.500: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Apr 4 18:01:44.782: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:01:46.806: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:01:48.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:01:50.788: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:01:52.818: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:01:54.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620102, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-54b47bf96b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:01:57.419: INFO: Waited 625.570896ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:01:57.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1962" for this suite.
• [SLOW TEST:16.350 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":281,"completed":119,"skipped":2082,"failed":0}
[k8s.io] Pods
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:01:57.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:180
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:02:02.343: INFO: Waiting up to 5m0s for pod "client-envvars-f9c77b5b-4294-4335-b6c1-9295b95009d1" in namespace "pods-5122" to be "Succeeded or Failed"
Apr 4 18:02:02.349: INFO: Pod "client-envvars-f9c77b5b-4294-4335-b6c1-9295b95009d1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.903913ms
Apr 4 18:02:04.353: INFO: Pod "client-envvars-f9c77b5b-4294-4335-b6c1-9295b95009d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009708173s
Apr 4 18:02:06.357: INFO: Pod "client-envvars-f9c77b5b-4294-4335-b6c1-9295b95009d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01392714s
STEP: Saw pod success
Apr 4 18:02:06.357: INFO: Pod "client-envvars-f9c77b5b-4294-4335-b6c1-9295b95009d1" satisfied condition "Succeeded or Failed"
Apr 4 18:02:06.360: INFO: Trying to get logs from node latest-worker2 pod client-envvars-f9c77b5b-4294-4335-b6c1-9295b95009d1 container env3cont: 
STEP: delete the pod
Apr 4 18:02:06.381: INFO: Waiting for pod client-envvars-f9c77b5b-4294-4335-b6c1-9295b95009d1 to disappear
Apr 4 18:02:06.386: INFO: Pod client-envvars-f9c77b5b-4294-4335-b6c1-9295b95009d1 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:02:06.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5122" for this suite.
• [SLOW TEST:8.441 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":281,"completed":120,"skipped":2082,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:02:06.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-1c24d6cc-c230-41b9-b73b-957cd46665f8
STEP: Creating a pod to test consume secrets
Apr 4 18:02:06.502: INFO: Waiting up to 5m0s for pod "pod-secrets-d2949479-919f-4387-9fb4-0b9a713310fd" in namespace "secrets-837" to be "Succeeded or Failed"
Apr 4 18:02:06.516: INFO: Pod "pod-secrets-d2949479-919f-4387-9fb4-0b9a713310fd": Phase="Pending", Reason="", readiness=false. Elapsed: 13.928613ms
Apr 4 18:02:08.520: INFO: Pod "pod-secrets-d2949479-919f-4387-9fb4-0b9a713310fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018238659s
Apr 4 18:02:10.531: INFO: Pod "pod-secrets-d2949479-919f-4387-9fb4-0b9a713310fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028782065s
STEP: Saw pod success
Apr 4 18:02:10.531: INFO: Pod "pod-secrets-d2949479-919f-4387-9fb4-0b9a713310fd" satisfied condition "Succeeded or Failed"
Apr 4 18:02:10.534: INFO: Trying to get logs from node latest-worker pod pod-secrets-d2949479-919f-4387-9fb4-0b9a713310fd container secret-volume-test: 
STEP: delete the pod
Apr 4 18:02:10.565: INFO: Waiting for pod pod-secrets-d2949479-919f-4387-9fb4-0b9a713310fd to disappear
Apr 4 18:02:10.584: INFO: Pod pod-secrets-d2949479-919f-4387-9fb4-0b9a713310fd no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:02:10.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-837" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":281,"completed":121,"skipped":2086,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:02:10.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod busybox-f27fbed5-58ac-43b2-ab6f-90f515450ccd in namespace container-probe-6978
Apr 4 18:02:14.676: INFO: Started pod busybox-f27fbed5-58ac-43b2-ab6f-90f515450ccd in namespace container-probe-6978
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 18:02:14.679: INFO: Initial restart count of pod busybox-f27fbed5-58ac-43b2-ab6f-90f515450ccd is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:06:15.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6978" for this suite.
• [SLOW TEST:244.863 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":281,"completed":122,"skipped":2096,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
should allow substituting values in a volume subpath [sig-storage] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:06:15.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in volume subpath
Apr 4 18:06:15.537: INFO: Waiting up to 5m0s for pod "var-expansion-a6dd864a-1cc7-420f-b08b-c8e9d6bce420" in namespace "var-expansion-6850" to be "Succeeded or Failed"
Apr 4 18:06:15.540: INFO: Pod "var-expansion-a6dd864a-1cc7-420f-b08b-c8e9d6bce420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.8594ms
Apr 4 18:06:17.545: INFO: Pod "var-expansion-a6dd864a-1cc7-420f-b08b-c8e9d6bce420": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007993262s
Apr 4 18:06:19.549: INFO: Pod "var-expansion-a6dd864a-1cc7-420f-b08b-c8e9d6bce420": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012472564s
STEP: Saw pod success
Apr 4 18:06:19.550: INFO: Pod "var-expansion-a6dd864a-1cc7-420f-b08b-c8e9d6bce420" satisfied condition "Succeeded or Failed"
Apr 4 18:06:19.552: INFO: Trying to get logs from node latest-worker2 pod var-expansion-a6dd864a-1cc7-420f-b08b-c8e9d6bce420 container dapi-container: 
STEP: delete the pod
Apr 4 18:06:19.583: INFO: Waiting for pod var-expansion-a6dd864a-1cc7-420f-b08b-c8e9d6bce420 to disappear
Apr 4 18:06:19.630: INFO: Pod var-expansion-a6dd864a-1cc7-420f-b08b-c8e9d6bce420 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:06:19.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6850" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":281,"completed":123,"skipped":2114,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:06:19.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-15d6ea4c-fc75-4993-b2c3-4275d00e9377
STEP: Creating configMap with name cm-test-opt-upd-1d5af84e-023c-442e-a83e-e045daf8f004
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-15d6ea4c-fc75-4993-b2c3-4275d00e9377
STEP: Updating configmap cm-test-opt-upd-1d5af84e-023c-442e-a83e-e045daf8f004
STEP: Creating configMap with name cm-test-opt-create-d38bc772-65f3-404e-8fbc-d34f9aa75691
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:07:30.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9282" for this suite.
• [SLOW TEST:70.584 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":281,"completed":124,"skipped":2120,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:07:30.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-d39fd78d-c455-4eba-88eb-da138263acd1
STEP: Creating a pod to test consume secrets
Apr 4 18:07:30.339: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f57eae42-506c-4972-9383-33c0bda41969" in namespace "projected-813" to be "Succeeded or Failed"
Apr 4 18:07:30.343: INFO: Pod "pod-projected-secrets-f57eae42-506c-4972-9383-33c0bda41969": Phase="Pending", Reason="", readiness=false. Elapsed: 3.585436ms
Apr 4 18:07:32.347: INFO: Pod "pod-projected-secrets-f57eae42-506c-4972-9383-33c0bda41969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008114867s
Apr 4 18:07:34.352: INFO: Pod "pod-projected-secrets-f57eae42-506c-4972-9383-33c0bda41969": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012318092s
STEP: Saw pod success
Apr 4 18:07:34.352: INFO: Pod "pod-projected-secrets-f57eae42-506c-4972-9383-33c0bda41969" satisfied condition "Succeeded or Failed"
Apr 4 18:07:34.355: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-f57eae42-506c-4972-9383-33c0bda41969 container projected-secret-volume-test: 
STEP: delete the pod
Apr 4 18:07:34.395: INFO: Waiting for pod pod-projected-secrets-f57eae42-506c-4972-9383-33c0bda41969 to disappear
Apr 4 18:07:34.409: INFO: Pod pod-projected-secrets-f57eae42-506c-4972-9383-33c0bda41969 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:07:34.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-813" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":281,"completed":125,"skipped":2127,"failed":0} ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:07:34.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 18:07:34.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5df46be2-a2ae-4a4f-b3c5-de88dcf677d7" in namespace "projected-9784" to be "Succeeded or Failed" Apr 4 18:07:34.499: INFO: Pod "downwardapi-volume-5df46be2-a2ae-4a4f-b3c5-de88dcf677d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.621768ms Apr 4 18:07:36.554: INFO: Pod "downwardapi-volume-5df46be2-a2ae-4a4f-b3c5-de88dcf677d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057965993s Apr 4 18:07:38.558: INFO: Pod "downwardapi-volume-5df46be2-a2ae-4a4f-b3c5-de88dcf677d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.062438753s STEP: Saw pod success Apr 4 18:07:38.558: INFO: Pod "downwardapi-volume-5df46be2-a2ae-4a4f-b3c5-de88dcf677d7" satisfied condition "Succeeded or Failed" Apr 4 18:07:38.562: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-5df46be2-a2ae-4a4f-b3c5-de88dcf677d7 container client-container: STEP: delete the pod Apr 4 18:07:38.613: INFO: Waiting for pod downwardapi-volume-5df46be2-a2ae-4a4f-b3c5-de88dcf677d7 to disappear Apr 4 18:07:38.626: INFO: Pod downwardapi-volume-5df46be2-a2ae-4a4f-b3c5-de88dcf677d7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:07:38.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9784" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":281,"completed":126,"skipped":2127,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:07:38.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a 
pod to test downward API volume plugin Apr 4 18:07:38.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f32de12-17f2-400a-a9bf-771a3d6b1285" in namespace "projected-7165" to be "Succeeded or Failed" Apr 4 18:07:38.925: INFO: Pod "downwardapi-volume-8f32de12-17f2-400a-a9bf-771a3d6b1285": Phase="Pending", Reason="", readiness=false. Elapsed: 3.698416ms Apr 4 18:07:40.929: INFO: Pod "downwardapi-volume-8f32de12-17f2-400a-a9bf-771a3d6b1285": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007214839s Apr 4 18:07:42.933: INFO: Pod "downwardapi-volume-8f32de12-17f2-400a-a9bf-771a3d6b1285": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01155788s STEP: Saw pod success Apr 4 18:07:42.933: INFO: Pod "downwardapi-volume-8f32de12-17f2-400a-a9bf-771a3d6b1285" satisfied condition "Succeeded or Failed" Apr 4 18:07:42.936: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-8f32de12-17f2-400a-a9bf-771a3d6b1285 container client-container: STEP: delete the pod Apr 4 18:07:42.962: INFO: Waiting for pod downwardapi-volume-8f32de12-17f2-400a-a9bf-771a3d6b1285 to disappear Apr 4 18:07:42.990: INFO: Pod downwardapi-volume-8f32de12-17f2-400a-a9bf-771a3d6b1285 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:07:42.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7165" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":127,"skipped":2128,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:07:42.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:75 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:07:43.679: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 4 18:07:48.682: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 4 18:07:48.682: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 Apr 4 18:07:48.782: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7561 /apis/apps/v1/namespaces/deployment-7561/deployments/test-cleanup-deployment 8a7a0f91-c503-4f80-89b8-b4a6fd4bd5e2 5400676 1 2020-04-04 18:07:48 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 
0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00383c908 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 4 18:07:48.787: INFO: New ReplicaSet "test-cleanup-deployment-577c77b589" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-577c77b589 deployment-7561 /apis/apps/v1/namespaces/deployment-7561/replicasets/test-cleanup-deployment-577c77b589 bb4f9a75-15f2-4bc4-b77c-f548d5efc321 5400678 1 2020-04-04 18:07:48 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 8a7a0f91-c503-4f80-89b8-b4a6fd4bd5e2 0xc00383cd97 0xc00383cd98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 
577c77b589,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00383ce08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:07:48.787: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 4 18:07:48.787: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-7561 /apis/apps/v1/namespaces/deployment-7561/replicasets/test-cleanup-controller a24f741a-f6eb-4853-9aa8-f3bbd369c8e7 5400677 1 2020-04-04 18:07:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 8a7a0f91-c503-4f80-89b8-b4a6fd4bd5e2 0xc00383ccaf 0xc00383ccc0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00383cd28 ClusterFirst map[] false false false 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:07:48.843: INFO: Pod "test-cleanup-controller-j8hd2" is available: &Pod{ObjectMeta:{test-cleanup-controller-j8hd2 test-cleanup-controller- deployment-7561 /api/v1/namespaces/deployment-7561/pods/test-cleanup-controller-j8hd2 afc725ee-03b9-4637-9579-7278f487553d 5400662 0 2020-04-04 18:07:43 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller a24f741a-f6eb-4853-9aa8-f3bbd369c8e7 0xc00383d2b7 0xc00383d2b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-95gzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-95gzb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-95gzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPres
ent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:07:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:07:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-04-04 18:07:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.13,StartTime:2020-04-04 18:07:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:07:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://63fd51858d723b8994848db6661ca2ac9f91850afc35f378c9e6825f42736d4a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:07:48.843: INFO: Pod "test-cleanup-deployment-577c77b589-s296k" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-577c77b589-s296k test-cleanup-deployment-577c77b589- deployment-7561 /api/v1/namespaces/deployment-7561/pods/test-cleanup-deployment-577c77b589-s296k ba68e6f6-9513-4fed-8b59-fdce2ac7428b 5400685 0 2020-04-04 18:07:48 +0000 UTC map[name:cleanup-pod pod-template-hash:577c77b589] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-577c77b589 bb4f9a75-15f2-4bc4-b77c-f548d5efc321 0xc00383d447 0xc00383d448}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-95gzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-95gzb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-95gzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullS
ecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:07:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:07:48.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7561" for this suite. 
• [SLOW TEST:5.920 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":281,"completed":128,"skipped":2148,"failed":0} [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:07:48.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-7843 STEP: creating replication controller nodeport-test in namespace services-7843 I0404 18:07:49.087317 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-7843, replica count: 2 I0404 18:07:52.137713 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:07:55.137954 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 4 18:07:55.138: 
INFO: Creating new exec pod Apr 4 18:08:00.180: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7843 execpod54wfc -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 4 18:08:00.384: INFO: stderr: "I0404 18:08:00.315558 2585 log.go:172] (0xc0009922c0) (0xc0006c9220) Create stream\nI0404 18:08:00.315613 2585 log.go:172] (0xc0009922c0) (0xc0006c9220) Stream added, broadcasting: 1\nI0404 18:08:00.318178 2585 log.go:172] (0xc0009922c0) Reply frame received for 1\nI0404 18:08:00.318235 2585 log.go:172] (0xc0009922c0) (0xc0009b2000) Create stream\nI0404 18:08:00.318266 2585 log.go:172] (0xc0009922c0) (0xc0009b2000) Stream added, broadcasting: 3\nI0404 18:08:00.319094 2585 log.go:172] (0xc0009922c0) Reply frame received for 3\nI0404 18:08:00.319153 2585 log.go:172] (0xc0009922c0) (0xc000946000) Create stream\nI0404 18:08:00.319178 2585 log.go:172] (0xc0009922c0) (0xc000946000) Stream added, broadcasting: 5\nI0404 18:08:00.320074 2585 log.go:172] (0xc0009922c0) Reply frame received for 5\nI0404 18:08:00.376603 2585 log.go:172] (0xc0009922c0) Data frame received for 5\nI0404 18:08:00.376635 2585 log.go:172] (0xc000946000) (5) Data frame handling\nI0404 18:08:00.376654 2585 log.go:172] (0xc000946000) (5) Data frame sent\nI0404 18:08:00.376663 2585 log.go:172] (0xc0009922c0) Data frame received for 5\nI0404 18:08:00.376671 2585 log.go:172] (0xc000946000) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0404 18:08:00.376692 2585 log.go:172] (0xc000946000) (5) Data frame sent\nI0404 18:08:00.377063 2585 log.go:172] (0xc0009922c0) Data frame received for 5\nI0404 18:08:00.377100 2585 log.go:172] (0xc000946000) (5) Data frame handling\nI0404 18:08:00.377285 2585 log.go:172] (0xc0009922c0) Data frame received for 3\nI0404 18:08:00.377304 2585 log.go:172] (0xc0009b2000) (3) Data frame handling\nI0404 18:08:00.379065 2585 
log.go:172] (0xc0009922c0) Data frame received for 1\nI0404 18:08:00.379099 2585 log.go:172] (0xc0006c9220) (1) Data frame handling\nI0404 18:08:00.379127 2585 log.go:172] (0xc0006c9220) (1) Data frame sent\nI0404 18:08:00.379172 2585 log.go:172] (0xc0009922c0) (0xc0006c9220) Stream removed, broadcasting: 1\nI0404 18:08:00.379200 2585 log.go:172] (0xc0009922c0) Go away received\nI0404 18:08:00.379586 2585 log.go:172] (0xc0009922c0) (0xc0006c9220) Stream removed, broadcasting: 1\nI0404 18:08:00.379615 2585 log.go:172] (0xc0009922c0) (0xc0009b2000) Stream removed, broadcasting: 3\nI0404 18:08:00.379628 2585 log.go:172] (0xc0009922c0) (0xc000946000) Stream removed, broadcasting: 5\n" Apr 4 18:08:00.384: INFO: stdout: "" Apr 4 18:08:00.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7843 execpod54wfc -- /bin/sh -x -c nc -zv -t -w 2 10.96.55.39 80' Apr 4 18:08:00.588: INFO: stderr: "I0404 18:08:00.515551 2606 log.go:172] (0xc00077a9a0) (0xc00079e140) Create stream\nI0404 18:08:00.515608 2606 log.go:172] (0xc00077a9a0) (0xc00079e140) Stream added, broadcasting: 1\nI0404 18:08:00.517888 2606 log.go:172] (0xc00077a9a0) Reply frame received for 1\nI0404 18:08:00.517994 2606 log.go:172] (0xc00077a9a0) (0xc000662fa0) Create stream\nI0404 18:08:00.518012 2606 log.go:172] (0xc00077a9a0) (0xc000662fa0) Stream added, broadcasting: 3\nI0404 18:08:00.519017 2606 log.go:172] (0xc00077a9a0) Reply frame received for 3\nI0404 18:08:00.519043 2606 log.go:172] (0xc00077a9a0) (0xc000663180) Create stream\nI0404 18:08:00.519051 2606 log.go:172] (0xc00077a9a0) (0xc000663180) Stream added, broadcasting: 5\nI0404 18:08:00.519830 2606 log.go:172] (0xc00077a9a0) Reply frame received for 5\nI0404 18:08:00.582593 2606 log.go:172] (0xc00077a9a0) Data frame received for 5\nI0404 18:08:00.582642 2606 log.go:172] (0xc000663180) (5) Data frame handling\nI0404 18:08:00.582652 2606 log.go:172] (0xc000663180) (5) 
Data frame sent\nI0404 18:08:00.582660 2606 log.go:172] (0xc00077a9a0) Data frame received for 5\nI0404 18:08:00.582671 2606 log.go:172] (0xc000663180) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.55.39 80\nConnection to 10.96.55.39 80 port [tcp/http] succeeded!\nI0404 18:08:00.582690 2606 log.go:172] (0xc00077a9a0) Data frame received for 3\nI0404 18:08:00.582697 2606 log.go:172] (0xc000662fa0) (3) Data frame handling\nI0404 18:08:00.584587 2606 log.go:172] (0xc00077a9a0) Data frame received for 1\nI0404 18:08:00.584607 2606 log.go:172] (0xc00079e140) (1) Data frame handling\nI0404 18:08:00.584623 2606 log.go:172] (0xc00079e140) (1) Data frame sent\nI0404 18:08:00.584634 2606 log.go:172] (0xc00077a9a0) (0xc00079e140) Stream removed, broadcasting: 1\nI0404 18:08:00.584654 2606 log.go:172] (0xc00077a9a0) Go away received\nI0404 18:08:00.584946 2606 log.go:172] (0xc00077a9a0) (0xc00079e140) Stream removed, broadcasting: 1\nI0404 18:08:00.584959 2606 log.go:172] (0xc00077a9a0) (0xc000662fa0) Stream removed, broadcasting: 3\nI0404 18:08:00.584964 2606 log.go:172] (0xc00077a9a0) (0xc000663180) Stream removed, broadcasting: 5\n" Apr 4 18:08:00.588: INFO: stdout: "" Apr 4 18:08:00.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-7843 execpod54wfc -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.13 31644' Apr 4 18:08:00.799: INFO: stderr: "I0404 18:08:00.728544 2627 log.go:172] (0xc00003a160) (0xc000818fa0) Create stream\nI0404 18:08:00.728604 2627 log.go:172] (0xc00003a160) (0xc000818fa0) Stream added, broadcasting: 1\nI0404 18:08:00.730890 2627 log.go:172] (0xc00003a160) Reply frame received for 1\nI0404 18:08:00.730932 2627 log.go:172] (0xc00003a160) (0xc000a2e000) Create stream\nI0404 18:08:00.730945 2627 log.go:172] (0xc00003a160) (0xc000a2e000) Stream added, broadcasting: 3\nI0404 18:08:00.731906 2627 log.go:172] (0xc00003a160) Reply frame received for 3\nI0404 18:08:00.731939 2627 
log.go:172] (0xc00003a160) (0xc000819180) Create stream\nI0404 18:08:00.731949 2627 log.go:172] (0xc00003a160) (0xc000819180) Stream added, broadcasting: 5\nI0404 18:08:00.732959 2627 log.go:172] (0xc00003a160) Reply frame received for 5\nI0404 18:08:00.791963 2627 log.go:172] (0xc00003a160) Data frame received for 5\nI0404 18:08:00.791995 2627 log.go:172] (0xc000819180) (5) Data frame handling\nI0404 18:08:00.792007 2627 log.go:172] (0xc000819180) (5) Data frame sent\nI0404 18:08:00.792014 2627 log.go:172] (0xc00003a160) Data frame received for 5\nI0404 18:08:00.792021 2627 log.go:172] (0xc000819180) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.13 31644\nConnection to 172.17.0.13 31644 port [tcp/31644] succeeded!\nI0404 18:08:00.792060 2627 log.go:172] (0xc000819180) (5) Data frame sent\nI0404 18:08:00.792232 2627 log.go:172] (0xc00003a160) Data frame received for 3\nI0404 18:08:00.792266 2627 log.go:172] (0xc000a2e000) (3) Data frame handling\nI0404 18:08:00.792426 2627 log.go:172] (0xc00003a160) Data frame received for 5\nI0404 18:08:00.792444 2627 log.go:172] (0xc000819180) (5) Data frame handling\nI0404 18:08:00.794434 2627 log.go:172] (0xc00003a160) Data frame received for 1\nI0404 18:08:00.794455 2627 log.go:172] (0xc000818fa0) (1) Data frame handling\nI0404 18:08:00.794466 2627 log.go:172] (0xc000818fa0) (1) Data frame sent\nI0404 18:08:00.794475 2627 log.go:172] (0xc00003a160) (0xc000818fa0) Stream removed, broadcasting: 1\nI0404 18:08:00.794487 2627 log.go:172] (0xc00003a160) Go away received\nI0404 18:08:00.794823 2627 log.go:172] (0xc00003a160) (0xc000818fa0) Stream removed, broadcasting: 1\nI0404 18:08:00.794839 2627 log.go:172] (0xc00003a160) (0xc000a2e000) Stream removed, broadcasting: 3\nI0404 18:08:00.794845 2627 log.go:172] (0xc00003a160) (0xc000819180) Stream removed, broadcasting: 5\n" Apr 4 18:08:00.799: INFO: stdout: "" Apr 4 18:08:00.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=services-7843 execpod54wfc -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.12 31644' Apr 4 18:08:01.016: INFO: stderr: "I0404 18:08:00.943433 2650 log.go:172] (0xc0003d51e0) (0xc00054b400) Create stream\nI0404 18:08:00.943493 2650 log.go:172] (0xc0003d51e0) (0xc00054b400) Stream added, broadcasting: 1\nI0404 18:08:00.946439 2650 log.go:172] (0xc0003d51e0) Reply frame received for 1\nI0404 18:08:00.946510 2650 log.go:172] (0xc0003d51e0) (0xc000794000) Create stream\nI0404 18:08:00.946614 2650 log.go:172] (0xc0003d51e0) (0xc000794000) Stream added, broadcasting: 3\nI0404 18:08:00.947605 2650 log.go:172] (0xc0003d51e0) Reply frame received for 3\nI0404 18:08:00.947699 2650 log.go:172] (0xc0003d51e0) (0xc000794140) Create stream\nI0404 18:08:00.947767 2650 log.go:172] (0xc0003d51e0) (0xc000794140) Stream added, broadcasting: 5\nI0404 18:08:00.948580 2650 log.go:172] (0xc0003d51e0) Reply frame received for 5\nI0404 18:08:01.011508 2650 log.go:172] (0xc0003d51e0) Data frame received for 5\nI0404 18:08:01.011544 2650 log.go:172] (0xc000794140) (5) Data frame handling\nI0404 18:08:01.011557 2650 log.go:172] (0xc000794140) (5) Data frame sent\nI0404 18:08:01.011572 2650 log.go:172] (0xc0003d51e0) Data frame received for 5\nI0404 18:08:01.011583 2650 log.go:172] (0xc000794140) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.12 31644\nConnection to 172.17.0.12 31644 port [tcp/31644] succeeded!\nI0404 18:08:01.011606 2650 log.go:172] (0xc0003d51e0) Data frame received for 3\nI0404 18:08:01.011612 2650 log.go:172] (0xc000794000) (3) Data frame handling\nI0404 18:08:01.012874 2650 log.go:172] (0xc0003d51e0) Data frame received for 1\nI0404 18:08:01.012894 2650 log.go:172] (0xc00054b400) (1) Data frame handling\nI0404 18:08:01.012908 2650 log.go:172] (0xc00054b400) (1) Data frame sent\nI0404 18:08:01.012921 2650 log.go:172] (0xc0003d51e0) (0xc00054b400) Stream removed, broadcasting: 1\nI0404 18:08:01.012936 2650 log.go:172] 
(0xc0003d51e0) Go away received\nI0404 18:08:01.013317 2650 log.go:172] (0xc0003d51e0) (0xc00054b400) Stream removed, broadcasting: 1\nI0404 18:08:01.013331 2650 log.go:172] (0xc0003d51e0) (0xc000794000) Stream removed, broadcasting: 3\nI0404 18:08:01.013336 2650 log.go:172] (0xc0003d51e0) (0xc000794140) Stream removed, broadcasting: 5\n" Apr 4 18:08:01.016: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:08:01.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7843" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:12.105 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":281,"completed":129,"skipped":2148,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:08:01.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: validating cluster-info
Apr 4 18:08:01.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config cluster-info'
Apr 4 18:08:01.463: INFO: stderr: ""
Apr 4 18:08:01.463: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32771/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:08:01.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9585" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":281,"completed":130,"skipped":2158,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:08:01.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:08:01.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8841" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":281,"completed":131,"skipped":2163,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:08:01.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated.
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:08:08.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4717" for this suite.
• [SLOW TEST:7.093 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated.
[Conformance]","total":281,"completed":132,"skipped":2193,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:08:08.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-ab70f497-e06c-4ea8-bd73-0f5ee8d558b2
STEP: Creating a pod to test consume configMaps
Apr 4 18:08:08.885: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2afa1f7-4fc6-4287-b80d-37db39033f91" in namespace "projected-7963" to be "Succeeded or Failed"
Apr 4 18:08:08.905: INFO: Pod "pod-projected-configmaps-b2afa1f7-4fc6-4287-b80d-37db39033f91": Phase="Pending", Reason="", readiness=false. Elapsed: 19.727148ms
Apr 4 18:08:10.908: INFO: Pod "pod-projected-configmaps-b2afa1f7-4fc6-4287-b80d-37db39033f91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023535029s
Apr 4 18:08:12.912: INFO: Pod "pod-projected-configmaps-b2afa1f7-4fc6-4287-b80d-37db39033f91": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.027394269s
STEP: Saw pod success
Apr 4 18:08:12.912: INFO: Pod "pod-projected-configmaps-b2afa1f7-4fc6-4287-b80d-37db39033f91" satisfied condition "Succeeded or Failed"
Apr 4 18:08:12.916: INFO: Trying to get logs from node latest-worker2 pod pod-projected-configmaps-b2afa1f7-4fc6-4287-b80d-37db39033f91 container projected-configmap-volume-test:
STEP: delete the pod
Apr 4 18:08:12.992: INFO: Waiting for pod pod-projected-configmaps-b2afa1f7-4fc6-4287-b80d-37db39033f91 to disappear
Apr 4 18:08:13.015: INFO: Pod pod-projected-configmaps-b2afa1f7-4fc6-4287-b80d-37db39033f91 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:08:13.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7963" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":281,"completed":133,"skipped":2208,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:08:13.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-447m
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 18:08:13.138: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-447m" in namespace "subpath-1821" to be "Succeeded or Failed"
Apr 4 18:08:13.142: INFO: Pod "pod-subpath-test-secret-447m": Phase="Pending", Reason="", readiness=false. Elapsed: 3.838149ms
Apr 4 18:08:15.146: INFO: Pod "pod-subpath-test-secret-447m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007908268s
Apr 4 18:08:17.150: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 4.012246828s
Apr 4 18:08:19.154: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 6.01669476s
Apr 4 18:08:21.158: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 8.020564218s
Apr 4 18:08:23.163: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 10.025145066s
Apr 4 18:08:25.167: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 12.028781093s
Apr 4 18:08:27.171: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 14.032891052s
Apr 4 18:08:29.175: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 16.037075161s
Apr 4 18:08:31.189: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 18.051321106s
Apr 4 18:08:33.193: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 20.055722308s
Apr 4 18:08:35.198: INFO: Pod "pod-subpath-test-secret-447m": Phase="Running", Reason="", readiness=true. Elapsed: 22.060553691s
Apr 4 18:08:37.203: INFO: Pod "pod-subpath-test-secret-447m": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.064945382s
STEP: Saw pod success
Apr 4 18:08:37.203: INFO: Pod "pod-subpath-test-secret-447m" satisfied condition "Succeeded or Failed"
Apr 4 18:08:37.206: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-secret-447m container test-container-subpath-secret-447m:
STEP: delete the pod
Apr 4 18:08:37.240: INFO: Waiting for pod pod-subpath-test-secret-447m to disappear
Apr 4 18:08:37.255: INFO: Pod pod-subpath-test-secret-447m no longer exists
STEP: Deleting pod pod-subpath-test-secret-447m
Apr 4 18:08:37.255: INFO: Deleting pod "pod-subpath-test-secret-447m" in namespace "subpath-1821"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:08:37.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1821" for this suite.
• [SLOW TEST:24.245 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with secret pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":281,"completed":134,"skipped":2221,"failed":0}
SS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:08:37.267: INFO: >>> kubeConfig: /root/.kube/config
STEP:
Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7939.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7939.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7939.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7939.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7939.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7939.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 18:08:43.394: INFO: DNS probes using dns-7939/dns-test-6b78d690-36f4-4bdb-953a-cf1e14e76880 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:08:43.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7939" for this suite. • [SLOW TEST:6.186 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":281,"completed":135,"skipped":2223,"failed":0} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:08:43.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should 
resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6222 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6222;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6222 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6222;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6222.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6222.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6222.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6222.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6222.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6222.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6222.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6222.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6222.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6222.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6222.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 69.54.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.54.69_udp@PTR;check="$$(dig +tcp +noall +answer +search 69.54.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.54.69_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6222 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6222;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6222 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6222;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6222.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6222.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6222.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6222.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6222.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6222.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6222.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-6222.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6222.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6222.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6222.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6222.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 69.54.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.54.69_udp@PTR;check="$$(dig +tcp +noall +answer +search 69.54.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.54.69_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 4 18:08:52.293: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.296: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.299: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.302: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server 
could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.304: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.308: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.311: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.314: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.338: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.342: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.344: INFO: Unable to read jessie_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.347: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222 from pod 
dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.350: INFO: Unable to read jessie_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.354: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.356: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.359: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:52.381: INFO: Lookups using dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6222 wheezy_tcp@dns-test-service.dns-6222 wheezy_udp@dns-test-service.dns-6222.svc wheezy_tcp@dns-test-service.dns-6222.svc wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6222 jessie_tcp@dns-test-service.dns-6222 jessie_udp@dns-test-service.dns-6222.svc jessie_tcp@dns-test-service.dns-6222.svc jessie_udp@_http._tcp.dns-test-service.dns-6222.svc jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc] Apr 4 18:08:57.537: INFO: Unable to read 
wheezy_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.562: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.569: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.576: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.579: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.582: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.620: INFO: 
Unable to read jessie_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.623: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.626: INFO: Unable to read jessie_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.628: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.631: INFO: Unable to read jessie_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.634: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.636: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:08:57.669: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 
18:08:57.687: INFO: Lookups using dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6222 wheezy_tcp@dns-test-service.dns-6222 wheezy_udp@dns-test-service.dns-6222.svc wheezy_tcp@dns-test-service.dns-6222.svc wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6222 jessie_tcp@dns-test-service.dns-6222 jessie_udp@dns-test-service.dns-6222.svc jessie_tcp@dns-test-service.dns-6222.svc jessie_udp@_http._tcp.dns-test-service.dns-6222.svc jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc] Apr 4 18:09:02.386: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.390: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.394: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.397: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.400: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.403: INFO: Unable 
to read wheezy_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.406: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.409: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.430: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.432: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.434: INFO: Unable to read jessie_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.437: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.439: INFO: Unable to read jessie_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.442: 
INFO: Unable to read jessie_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.444: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.447: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:02.464: INFO: Lookups using dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6222 wheezy_tcp@dns-test-service.dns-6222 wheezy_udp@dns-test-service.dns-6222.svc wheezy_tcp@dns-test-service.dns-6222.svc wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6222 jessie_tcp@dns-test-service.dns-6222 jessie_udp@dns-test-service.dns-6222.svc jessie_tcp@dns-test-service.dns-6222.svc jessie_udp@_http._tcp.dns-test-service.dns-6222.svc jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc] Apr 4 18:09:07.400: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.402: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 
18:09:07.515: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.518: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.522: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.524: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.527: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.529: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.554: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.556: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods 
dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.558: INFO: Unable to read jessie_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.560: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.562: INFO: Unable to read jessie_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.564: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.566: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.568: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:07.582: INFO: Lookups using dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6222 wheezy_tcp@dns-test-service.dns-6222 wheezy_udp@dns-test-service.dns-6222.svc wheezy_tcp@dns-test-service.dns-6222.svc wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6222 jessie_tcp@dns-test-service.dns-6222 jessie_udp@dns-test-service.dns-6222.svc jessie_tcp@dns-test-service.dns-6222.svc jessie_udp@_http._tcp.dns-test-service.dns-6222.svc jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc] Apr 4 18:09:12.385: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.388: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.390: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.393: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.395: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.398: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.400: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc from pod 
dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.573: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.575: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.577: INFO: Unable to read jessie_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.580: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.582: INFO: Unable to read jessie_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.585: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.587: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.590: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:12.612: INFO: Lookups using dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6222 wheezy_tcp@dns-test-service.dns-6222 wheezy_udp@dns-test-service.dns-6222.svc wheezy_tcp@dns-test-service.dns-6222.svc wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6222 jessie_tcp@dns-test-service.dns-6222 jessie_udp@dns-test-service.dns-6222.svc jessie_tcp@dns-test-service.dns-6222.svc jessie_udp@_http._tcp.dns-test-service.dns-6222.svc jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc] Apr 4 18:09:17.385: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.388: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.390: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.393: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.395: INFO: Unable to read wheezy_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.398: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.400: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.421: INFO: Unable to read jessie_udp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.423: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.426: INFO: Unable to read jessie_udp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.429: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-6222 from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.431: INFO: Unable to read jessie_udp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.434: INFO: Unable to read jessie_tcp@dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.436: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.439: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc from pod dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951: the server could not find the requested resource (get pods dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951) Apr 4 18:09:17.529: INFO: Lookups using dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-6222 wheezy_tcp@dns-test-service.dns-6222 wheezy_udp@dns-test-service.dns-6222.svc wheezy_tcp@dns-test-service.dns-6222.svc wheezy_udp@_http._tcp.dns-test-service.dns-6222.svc wheezy_tcp@_http._tcp.dns-test-service.dns-6222.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-6222 jessie_tcp@dns-test-service.dns-6222 jessie_udp@dns-test-service.dns-6222.svc jessie_tcp@dns-test-service.dns-6222.svc jessie_udp@_http._tcp.dns-test-service.dns-6222.svc jessie_tcp@_http._tcp.dns-test-service.dns-6222.svc] 
Apr 4 18:09:22.461: INFO: DNS probes using dns-6222/dns-test-d1ad1adb-93b2-4984-9abe-9895fd3ec951 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:09:23.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6222" for this suite. • [SLOW TEST:40.535 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":281,"completed":136,"skipped":2225,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:09:23.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:10:24.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7061" for this suite. • [SLOW TEST:60.294 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":281,"completed":137,"skipped":2249,"failed":0} [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:10:24.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 4 18:10:24.394: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3223 /api/v1/namespaces/watch-3223/configmaps/e2e-watch-test-resource-version 8cb5c24d-a21e-455f-9b74-826ef99e6c87 5401452 0 2020-04-04 18:10:24 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:10:24.395: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3223 /api/v1/namespaces/watch-3223/configmaps/e2e-watch-test-resource-version 8cb5c24d-a21e-455f-9b74-826ef99e6c87 5401453 0 2020-04-04 18:10:24 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:10:24.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3223" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":281,"completed":138,"skipped":2249,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:10:24.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-42046a6d-6efb-4224-ad09-853473bbc8bc STEP: Creating a pod to test consume secrets Apr 4 18:10:24.511: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6937508f-01ca-48e3-9902-3ae397af8f17" in namespace "projected-6943" to be "Succeeded or Failed" Apr 4 18:10:24.515: INFO: Pod "pod-projected-secrets-6937508f-01ca-48e3-9902-3ae397af8f17": Phase="Pending", Reason="", readiness=false. Elapsed: 3.933914ms Apr 4 18:10:26.519: INFO: Pod "pod-projected-secrets-6937508f-01ca-48e3-9902-3ae397af8f17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008220931s Apr 4 18:10:28.524: INFO: Pod "pod-projected-secrets-6937508f-01ca-48e3-9902-3ae397af8f17": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012779329s STEP: Saw pod success Apr 4 18:10:28.524: INFO: Pod "pod-projected-secrets-6937508f-01ca-48e3-9902-3ae397af8f17" satisfied condition "Succeeded or Failed" Apr 4 18:10:28.527: INFO: Trying to get logs from node latest-worker2 pod pod-projected-secrets-6937508f-01ca-48e3-9902-3ae397af8f17 container projected-secret-volume-test: STEP: delete the pod Apr 4 18:10:28.572: INFO: Waiting for pod pod-projected-secrets-6937508f-01ca-48e3-9902-3ae397af8f17 to disappear Apr 4 18:10:28.593: INFO: Pod pod-projected-secrets-6937508f-01ca-48e3-9902-3ae397af8f17 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:10:28.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6943" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":139,"skipped":2257,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:10:28.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret 
secrets-6184/secret-test-81ebcdb5-bc4a-4d68-a78d-5141270da9be STEP: Creating a pod to test consume secrets Apr 4 18:10:28.715: INFO: Waiting up to 5m0s for pod "pod-configmaps-d07af2af-a455-4270-84ea-094cab895da1" in namespace "secrets-6184" to be "Succeeded or Failed" Apr 4 18:10:28.719: INFO: Pod "pod-configmaps-d07af2af-a455-4270-84ea-094cab895da1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.684948ms Apr 4 18:10:30.722: INFO: Pod "pod-configmaps-d07af2af-a455-4270-84ea-094cab895da1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006806954s Apr 4 18:10:32.727: INFO: Pod "pod-configmaps-d07af2af-a455-4270-84ea-094cab895da1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011119883s STEP: Saw pod success Apr 4 18:10:32.727: INFO: Pod "pod-configmaps-d07af2af-a455-4270-84ea-094cab895da1" satisfied condition "Succeeded or Failed" Apr 4 18:10:32.730: INFO: Trying to get logs from node latest-worker pod pod-configmaps-d07af2af-a455-4270-84ea-094cab895da1 container env-test: STEP: delete the pod Apr 4 18:10:33.001: INFO: Waiting for pod pod-configmaps-d07af2af-a455-4270-84ea-094cab895da1 to disappear Apr 4 18:10:33.065: INFO: Pod pod-configmaps-d07af2af-a455-4270-84ea-094cab895da1 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:10:33.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6184" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":281,"completed":140,"skipped":2302,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:10:33.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:10:33.306: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:10:33.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-229" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":281,"completed":141,"skipped":2314,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:10:34.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 4 18:10:34.155: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:10:34.160: INFO: Number of nodes with available pods: 0 Apr 4 18:10:34.160: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:10:35.165: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:10:35.168: INFO: Number of nodes with available pods: 0 Apr 4 18:10:35.168: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:10:36.165: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:10:36.170: INFO: Number of nodes with available pods: 0 Apr 4 18:10:36.170: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:10:37.164: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:10:37.167: INFO: Number of nodes with available pods: 0 Apr 4 18:10:37.167: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:10:38.245: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:10:38.301: INFO: Number of nodes with available pods: 2 Apr 4 18:10:38.301: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 4 18:10:38.325: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:10:38.336: INFO: Number of nodes with available pods: 2
Apr 4 18:10:38.336: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1734, will wait for the garbage collector to delete the pods
Apr 4 18:10:39.576: INFO: Deleting DaemonSet.extensions daemon-set took: 6.393411ms
Apr 4 18:10:39.876: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.273093ms
Apr 4 18:10:53.143: INFO: Number of nodes with available pods: 0
Apr 4 18:10:53.143: INFO: Number of running nodes: 0, number of available pods: 0
Apr 4 18:10:53.146: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1734/daemonsets","resourceVersion":"5401681"},"items":null}
Apr 4 18:10:53.148: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1734/pods","resourceVersion":"5401681"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:10:53.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1734" for this suite.
• [SLOW TEST:19.171 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":281,"completed":142,"skipped":2320,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:10:53.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0404 18:10:54.437317 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 4 18:10:54.437: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:10:54.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6643" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":281,"completed":143,"skipped":2418,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:10:54.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Apr 4 18:10:54.498: INFO: >>> kubeConfig: /root/.kube/config
Apr 4 18:10:56.412: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:11:07.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3965" for this suite.
• [SLOW TEST:13.441 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of same group and version but different kinds [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":281,"completed":144,"skipped":2459,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:11:07.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test hostPath mode
Apr 4 18:11:07.939: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5234" to be "Succeeded or Failed"
Apr 4 18:11:07.942: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.027555ms
Apr 4 18:11:10.005: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066725765s
Apr 4 18:11:12.010: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.070933195s
Apr 4 18:11:14.030: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091140489s
STEP: Saw pod success
Apr 4 18:11:14.030: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Apr 4 18:11:14.046: INFO: Trying to get logs from node latest-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 4 18:11:14.066: INFO: Waiting for pod pod-host-path-test to disappear
Apr 4 18:11:14.071: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:11:14.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-5234" for this suite.
• [SLOW TEST:6.191 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":145,"skipped":2475,"failed":0}
SSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:11:14.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 4 18:11:14.174: INFO: Waiting up to 5m0s for pod "downward-api-fdbced92-9cd8-4887-a109-6f7a6f3794f3" in namespace "downward-api-9381" to be "Succeeded or Failed"
Apr 4 18:11:14.212: INFO: Pod "downward-api-fdbced92-9cd8-4887-a109-6f7a6f3794f3": Phase="Pending", Reason="", readiness=false. Elapsed: 37.295312ms
Apr 4 18:11:16.216: INFO: Pod "downward-api-fdbced92-9cd8-4887-a109-6f7a6f3794f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041984563s
Apr 4 18:11:18.221: INFO: Pod "downward-api-fdbced92-9cd8-4887-a109-6f7a6f3794f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046527549s
STEP: Saw pod success
Apr 4 18:11:18.221: INFO: Pod "downward-api-fdbced92-9cd8-4887-a109-6f7a6f3794f3" satisfied condition "Succeeded or Failed"
Apr 4 18:11:18.224: INFO: Trying to get logs from node latest-worker2 pod downward-api-fdbced92-9cd8-4887-a109-6f7a6f3794f3 container dapi-container:
STEP: delete the pod
Apr 4 18:11:18.264: INFO: Waiting for pod downward-api-fdbced92-9cd8-4887-a109-6f7a6f3794f3 to disappear
Apr 4 18:11:18.269: INFO: Pod downward-api-fdbced92-9cd8-4887-a109-6f7a6f3794f3 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:11:18.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9381" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":281,"completed":146,"skipped":2478,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:11:18.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 4 18:11:18.337: INFO: Waiting up to 5m0s for pod "pod-b846eb43-6732-461f-bccf-d70b1c337758" in namespace "emptydir-1058" to be "Succeeded or Failed"
Apr 4 18:11:18.344: INFO: Pod "pod-b846eb43-6732-461f-bccf-d70b1c337758": Phase="Pending", Reason="", readiness=false. Elapsed: 6.886296ms
Apr 4 18:11:20.348: INFO: Pod "pod-b846eb43-6732-461f-bccf-d70b1c337758": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011180507s
Apr 4 18:11:22.352: INFO: Pod "pod-b846eb43-6732-461f-bccf-d70b1c337758": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01515946s
STEP: Saw pod success
Apr 4 18:11:22.352: INFO: Pod "pod-b846eb43-6732-461f-bccf-d70b1c337758" satisfied condition "Succeeded or Failed"
Apr 4 18:11:22.355: INFO: Trying to get logs from node latest-worker2 pod pod-b846eb43-6732-461f-bccf-d70b1c337758 container test-container:
STEP: delete the pod
Apr 4 18:11:22.388: INFO: Waiting for pod pod-b846eb43-6732-461f-bccf-d70b1c337758 to disappear
Apr 4 18:11:22.403: INFO: Pod pod-b846eb43-6732-461f-bccf-d70b1c337758 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:11:22.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1058" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":147,"skipped":2497,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:11:22.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 18:11:23.214: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 18:11:25.221: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620683, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620683, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620683, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620683, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 18:11:28.250: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 4 18:11:32.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config attach --namespace=webhook-6134 to-be-attached-pod -i -c=container1'
Apr 4 18:11:32.438: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:11:32.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6134" for this suite.
STEP: Destroying namespace "webhook-6134-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.114 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny attaching pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":281,"completed":148,"skipped":2504,"failed":0}
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:11:32.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:11:36.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9383" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":149,"skipped":2504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:11:36.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-2021
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 4 18:11:36.715: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 4 18:11:36.821: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 18:11:38.825: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 18:11:40.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:11:42.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:11:44.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:11:46.824: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:11:48.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:11:50.826: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:11:52.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:11:54.825: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:11:56.838: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 4 18:11:56.842: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 4 18:11:58.847: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 4 18:12:00.847: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 4 18:12:04.982: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.201:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2021 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 18:12:04.982: INFO: >>> kubeConfig: /root/.kube/config
I0404 18:12:05.037752 7 log.go:172] (0xc00288e420) (0xc001b4f360) Create stream
I0404 18:12:05.037782 7 log.go:172] (0xc00288e420) (0xc001b4f360) Stream added, broadcasting: 1
I0404 18:12:05.039282 7 log.go:172] (0xc00288e420) Reply frame received for 1
I0404 18:12:05.039309 7 log.go:172] (0xc00288e420) (0xc0029bbcc0) Create stream
I0404 18:12:05.039323 7 log.go:172] (0xc00288e420) (0xc0029bbcc0) Stream added, broadcasting: 3
I0404 18:12:05.039977 7 log.go:172] (0xc00288e420) Reply frame received for 3
I0404 18:12:05.040003 7 log.go:172] (0xc00288e420) (0xc001326140) Create stream
I0404 18:12:05.040013 7 log.go:172] (0xc00288e420) (0xc001326140) Stream added, broadcasting: 5
I0404 18:12:05.040589 7 log.go:172] (0xc00288e420) Reply frame received for 5
I0404 18:12:05.119603 7 log.go:172] (0xc00288e420) Data frame received for 5
I0404 18:12:05.119637 7 log.go:172] (0xc001326140) (5) Data frame handling
I0404 18:12:05.119656 7 log.go:172] (0xc00288e420) Data frame received for 3
I0404 18:12:05.119670 7 log.go:172] (0xc0029bbcc0) (3) Data frame handling
I0404 18:12:05.119683 7 log.go:172] (0xc0029bbcc0) (3) Data frame sent
I0404 18:12:05.119692 7 log.go:172] (0xc00288e420) Data frame received for 3
I0404 18:12:05.119699 7 log.go:172] (0xc0029bbcc0) (3) Data frame handling
I0404 18:12:05.121215 7 log.go:172] (0xc00288e420) Data frame received for 1
I0404 18:12:05.121236 7 log.go:172] (0xc001b4f360) (1) Data frame handling
I0404 18:12:05.121250 7 log.go:172] (0xc001b4f360) (1) Data frame sent
I0404 18:12:05.121263 7 log.go:172] (0xc00288e420) (0xc001b4f360) Stream removed, broadcasting: 1
I0404 18:12:05.121280 7 log.go:172] (0xc00288e420) Go away received
I0404 18:12:05.121375 7 log.go:172] (0xc00288e420) (0xc001b4f360) Stream removed, broadcasting: 1
I0404 18:12:05.121387 7 log.go:172] (0xc00288e420) (0xc0029bbcc0) Stream removed, broadcasting: 3
I0404 18:12:05.121395 7 log.go:172] (0xc00288e420) (0xc001326140) Stream removed, broadcasting: 5
Apr 4 18:12:05.121: INFO: Found all expected endpoints: [netserver-0]
Apr 4 18:12:05.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.25:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2021 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 18:12:05.124: INFO: >>> kubeConfig: /root/.kube/config
I0404 18:12:05.145435 7 log.go:172] (0xc0022eed10) (0xc0012101e0) Create stream
I0404 18:12:05.145457 7 log.go:172] (0xc0022eed10) (0xc0012101e0) Stream added, broadcasting: 1
I0404 18:12:05.153643 7 log.go:172] (0xc0022eed10) Reply frame received for 1
I0404 18:12:05.153704 7 log.go:172] (0xc0022eed10) (0xc001210280) Create stream
I0404 18:12:05.153727 7 log.go:172] (0xc0022eed10) (0xc001210280) Stream added, broadcasting: 3
I0404 18:12:05.155042 7 log.go:172] (0xc0022eed10) Reply frame received for 3
I0404 18:12:05.155067 7 log.go:172] (0xc0022eed10) (0xc001737f40) Create stream
I0404 18:12:05.155076 7 log.go:172] (0xc0022eed10) (0xc001737f40) Stream added, broadcasting: 5
I0404 18:12:05.157049 7 log.go:172] (0xc0022eed10) Reply frame received for 5
I0404 18:12:05.228281 7 log.go:172] (0xc0022eed10) Data frame received for 3
I0404 18:12:05.228328 7 log.go:172] (0xc001210280) (3) Data frame handling
I0404 18:12:05.228359 7 log.go:172] (0xc001210280) (3) Data frame sent
I0404 18:12:05.228391 7 log.go:172] (0xc0022eed10) Data frame received for 5
I0404 18:12:05.228414 7 log.go:172] (0xc001737f40) (5) Data frame handling
I0404 18:12:05.228506 7 log.go:172] (0xc0022eed10) Data frame received for 3
I0404 18:12:05.228526 7 log.go:172] (0xc001210280) (3) Data frame handling
I0404 18:12:05.230544 7 log.go:172] (0xc0022eed10) Data frame received for 1
I0404 18:12:05.230578 7 log.go:172] (0xc0012101e0) (1) Data frame handling
I0404 18:12:05.230631 7 log.go:172] (0xc0012101e0) (1) Data frame sent
I0404 18:12:05.230645 7 log.go:172] (0xc0022eed10) (0xc0012101e0) Stream removed, broadcasting: 1
I0404 18:12:05.230659 7 log.go:172] (0xc0022eed10) Go away received
I0404 18:12:05.230796 7 log.go:172] (0xc0022eed10) (0xc0012101e0) Stream removed, broadcasting: 1
I0404 18:12:05.230821 7 log.go:172] (0xc0022eed10) (0xc001210280) Stream removed, broadcasting: 3
I0404 18:12:05.230840 7 log.go:172] (0xc0022eed10) (0xc001737f40) Stream removed, broadcasting: 5
Apr 4 18:12:05.230: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:12:05.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2021" for this suite.
• [SLOW TEST:28.567 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":150,"skipped":2554,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:12:05.240: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Apr 4 18:12:09.344: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-5445 PodName:var-expansion-bc398538-3d41-4f06-bbcb-1407ab18fa37 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 18:12:09.344: INFO: >>> kubeConfig: /root/.kube/config
I0404 18:12:09.383086 7 log.go:172] (0xc00288edc0) (0xc001116be0) Create stream
I0404 18:12:09.383123 7 log.go:172] (0xc00288edc0) (0xc001116be0) Stream added, broadcasting: 1
I0404 18:12:09.384990 7 log.go:172] (0xc00288edc0) Reply frame received for 1
I0404 18:12:09.385050 7 log.go:172] (0xc00288edc0) (0xc000434280) Create stream
I0404 18:12:09.385072 7 log.go:172] (0xc00288edc0) (0xc000434280) Stream added, broadcasting: 3
I0404 18:12:09.386165 7 log.go:172] (0xc00288edc0) Reply frame received for 3
I0404 18:12:09.386199 7 log.go:172] (0xc00288edc0) (0xc001116d20) Create stream
I0404 18:12:09.386212 7 log.go:172] (0xc00288edc0) (0xc001116d20) Stream added, broadcasting: 5
I0404 18:12:09.387143 7 log.go:172] (0xc00288edc0) Reply frame received for 5
I0404 18:12:09.469378 7 log.go:172] (0xc00288edc0) Data frame received for 5
I0404 18:12:09.469403 7 log.go:172] (0xc001116d20) (5) Data frame handling
I0404 18:12:09.469420 7 log.go:172] (0xc00288edc0) Data frame received for 3
I0404 18:12:09.469430 7 log.go:172] (0xc000434280) (3) Data frame handling
I0404 18:12:09.471059 7 log.go:172] (0xc00288edc0) Data frame received for 1
I0404 18:12:09.471081 7 log.go:172] (0xc001116be0) (1) Data frame handling
I0404 18:12:09.471103 7 log.go:172] (0xc001116be0) (1) Data frame sent
I0404 18:12:09.471119 7 log.go:172] (0xc00288edc0) (0xc001116be0) Stream removed, broadcasting: 1
I0404 18:12:09.471142 7 log.go:172] (0xc00288edc0) Go away received
I0404 18:12:09.471189 7 log.go:172] (0xc00288edc0) (0xc001116be0) Stream removed, broadcasting: 1
I0404 18:12:09.471201 7 log.go:172] (0xc00288edc0) (0xc000434280) Stream removed, broadcasting: 3
I0404 18:12:09.471208 7 log.go:172] (0xc00288edc0) (0xc001116d20) Stream removed, broadcasting: 5
STEP: test for file in mounted path
Apr 4 18:12:09.474: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-5445 PodName:var-expansion-bc398538-3d41-4f06-bbcb-1407ab18fa37 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 18:12:09.474: INFO: >>> kubeConfig: /root/.kube/config
I0404 18:12:09.506458 7 log.go:172] (0xc002670580) (0xc001326dc0) Create stream
I0404 18:12:09.506497 7 log.go:172] (0xc002670580) (0xc001326dc0) Stream added, broadcasting: 1
I0404 18:12:09.508260 7 log.go:172] (0xc002670580) Reply frame received for 1
I0404 18:12:09.508308 7 log.go:172] (0xc002670580) (0xc001326e60) Create stream
I0404 18:12:09.508321 7 log.go:172] (0xc002670580) (0xc001326e60) Stream added, broadcasting: 3
I0404 18:12:09.509743 7 log.go:172] (0xc002670580) Reply frame received for 3
I0404 18:12:09.509787 7 log.go:172] (0xc002670580) (0xc000434960) Create stream
I0404 18:12:09.509805 7 log.go:172] (0xc002670580) (0xc000434960) Stream added, broadcasting: 5
I0404 18:12:09.510791 7 log.go:172] (0xc002670580) Reply frame received for 5
I0404 18:12:09.572860 7 log.go:172] (0xc002670580) Data frame received for 5
I0404 18:12:09.572892 7 log.go:172] (0xc000434960) (5) Data frame handling
I0404 18:12:09.572911 7 log.go:172] (0xc002670580) Data frame received for 3
I0404 18:12:09.572917 7 log.go:172] (0xc001326e60) (3) Data frame handling
I0404 18:12:09.574643 7 log.go:172] (0xc002670580) Data frame received for 1
I0404 18:12:09.574662 7 log.go:172] (0xc001326dc0) (1) Data frame handling
I0404 18:12:09.574680 7 log.go:172] (0xc001326dc0) (1) Data frame sent
I0404 18:12:09.574691 7 log.go:172] (0xc002670580) (0xc001326dc0) Stream removed, broadcasting: 1
I0404 18:12:09.574764 7 log.go:172] (0xc002670580) Go away received
I0404 18:12:09.574821 7 log.go:172] (0xc002670580) (0xc001326dc0) Stream removed, broadcasting: 1
I0404 18:12:09.574864 7 log.go:172] (0xc002670580) (0xc001326e60) Stream removed, broadcasting: 3
I0404 18:12:09.574885 7 log.go:172] (0xc002670580) (0xc000434960) Stream removed, broadcasting: 5
STEP: updating the annotation value
Apr 4 18:12:10.083: INFO: Successfully updated pod "var-expansion-bc398538-3d41-4f06-bbcb-1407ab18fa37"
STEP: waiting for annotated pod running
STEP: deleting the pod gracefully
Apr 4 18:12:10.102: INFO: Deleting pod "var-expansion-bc398538-3d41-4f06-bbcb-1407ab18fa37" in namespace "var-expansion-5445"
Apr 4 18:12:10.106: INFO: Wait up to 5m0s for pod "var-expansion-bc398538-3d41-4f06-bbcb-1407ab18fa37" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:12:44.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5445" for this suite.
• [SLOW TEST:38.886 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":281,"completed":151,"skipped":2555,"failed":0}
SSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:12:44.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:75
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:12:44.243: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 4 18:12:44.276: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 4 18:12:49.279: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 4 18:12:49.280: INFO: Creating deployment "test-rolling-update-deployment"
Apr 4 18:12:49.283: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Apr 4 18:12:49.291: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Apr 4 18:12:51.298: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Apr 4 18:12:51.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620769, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620769, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620769, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620769, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-664dd8fc7f\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:12:53.305: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 Apr 4 18:12:53.313: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-2833 /apis/apps/v1/namespaces/deployment-2833/deployments/test-rolling-update-deployment 7ea7b263-5bfe-4063-b58b-e4b52dc5cc84 5402448 1 2020-04-04 18:12:49 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004015bf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-04 18:12:49 +0000 UTC,LastTransitionTime:2020-04-04 18:12:49 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-664dd8fc7f" has successfully progressed.,LastUpdateTime:2020-04-04 18:12:52 +0000 UTC,LastTransitionTime:2020-04-04 18:12:49 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 4 18:12:53.316: INFO: New ReplicaSet "test-rolling-update-deployment-664dd8fc7f" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f deployment-2833 /apis/apps/v1/namespaces/deployment-2833/replicasets/test-rolling-update-deployment-664dd8fc7f 23e88f8d-928b-405b-9bd2-aef34016d461 5402437 1 2020-04-04 18:12:49 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 7ea7b263-5bfe-4063-b58b-e4b52dc5cc84 0xc003f52407 0xc003f52408}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 664dd8fc7f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f524f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:12:53.316: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 4 18:12:53.316: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-2833 /apis/apps/v1/namespaces/deployment-2833/replicasets/test-rolling-update-controller ba65a6fd-b43a-4b24-be6f-35357c612ce1 5402446 2 2020-04-04 18:12:44 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 7ea7b263-5bfe-4063-b58b-e4b52dc5cc84 0xc003f522c7 0xc003f522c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003f52358 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:12:53.319: INFO: Pod "test-rolling-update-deployment-664dd8fc7f-pqzrl" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-664dd8fc7f-pqzrl test-rolling-update-deployment-664dd8fc7f- deployment-2833 
/api/v1/namespaces/deployment-2833/pods/test-rolling-update-deployment-664dd8fc7f-pqzrl 80cd4df3-0f4d-4776-a82b-584d6d30487c 5402436 0 2020-04-04 18:12:49 +0000 UTC map[name:sample-pod pod-template-hash:664dd8fc7f] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-664dd8fc7f 23e88f8d-928b-405b-9bd2-aef34016d461 0xc003fa7217 0xc003fa7218}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xsgft,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xsgft,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xsgft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[st
ring]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:12:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:12:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:12:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:12:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.29,StartTime:2020-04-04 18:12:49 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:12:51 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://2a96cf9db43ba2e5e92dadcae981d9df3e6a9ab991a56e9a644317a1db2bf217,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.29,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:12:53.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2833" for this suite. 
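[Editor's note] The 25% MaxSurge / MaxUnavailable values printed in the Deployment spec above resolve to absolute pod counts before the rollout runs. A minimal Python sketch of the standard resolution rules (maxSurge rounds up, maxUnavailable rounds down; the function name here is ours, not the controller's):

```python
import math

def resolve_fenceposts(replicas, max_surge_pct, max_unavailable_pct):
    """Resolve percentage-based rolling-update fields to pod counts:
    maxSurge rounds up, maxUnavailable rounds down, and if both come
    out zero, maxUnavailable is bumped to 1 so the rollout can progress."""
    surge = math.ceil(replicas * max_surge_pct / 100)
    unavailable = math.floor(replicas * max_unavailable_pct / 100)
    if surge == 0 and unavailable == 0:
        unavailable = 1  # the controller must always be able to make progress
    return surge, unavailable
```

For the single-replica deployment in this run, 25%/25% resolves to surge 1 and unavailable 0, which is consistent with the status above briefly reporting Replicas:2 (one old pod plus one surge pod) during the update.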
• [SLOW TEST:9.201 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":281,"completed":152,"skipped":2560,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:12:53.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-48088112-c725-4fc2-b32e-69ebb24ea65e STEP: Creating a pod to test consume secrets Apr 4 18:12:53.401: INFO: Waiting up to 5m0s for pod "pod-secrets-1795d853-467a-4437-9c95-1aa1e0ea0af6" in namespace "secrets-1789" to be "Succeeded or Failed" Apr 4 18:12:53.406: INFO: Pod "pod-secrets-1795d853-467a-4437-9c95-1aa1e0ea0af6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.528775ms Apr 4 18:12:55.410: INFO: Pod "pod-secrets-1795d853-467a-4437-9c95-1aa1e0ea0af6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008912859s Apr 4 18:12:57.414: INFO: Pod "pod-secrets-1795d853-467a-4437-9c95-1aa1e0ea0af6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012342991s STEP: Saw pod success Apr 4 18:12:57.414: INFO: Pod "pod-secrets-1795d853-467a-4437-9c95-1aa1e0ea0af6" satisfied condition "Succeeded or Failed" Apr 4 18:12:57.416: INFO: Trying to get logs from node latest-worker2 pod pod-secrets-1795d853-467a-4437-9c95-1aa1e0ea0af6 container secret-volume-test: STEP: delete the pod Apr 4 18:12:57.480: INFO: Waiting for pod pod-secrets-1795d853-467a-4437-9c95-1aa1e0ea0af6 to disappear Apr 4 18:12:57.521: INFO: Pod pod-secrets-1795d853-467a-4437-9c95-1aa1e0ea0af6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:12:57.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1789" for this suite. 
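[Editor's note] The defaultMode in the Secrets volume test above sets the permission bits on the projected files. The concrete mode value is not shown in this log, so 0o440 below is an assumed example; the sketch just renders an octal mode the way `ls -l` would show the file:

```python
import stat

def secret_file_mode(default_mode):
    """Render a Secret volume defaultMode (an octal int such as 0o440)
    as the ls-style permission string the projected file would carry."""
    # S_IFREG marks it as a regular file so filemode prints the leading '-'.
    return stat.filemode(stat.S_IFREG | default_mode)
```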
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":153,"skipped":2603,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:12:57.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-4cd4a100-0170-4dc8-9014-a62901d65c6c in namespace container-probe-1942 Apr 4 18:13:01.616: INFO: Started pod liveness-4cd4a100-0170-4dc8-9014-a62901d65c6c in namespace container-probe-1942 STEP: checking the pod's current state and verifying that restartCount is present Apr 4 18:13:01.619: INFO: Initial restart count of pod liveness-4cd4a100-0170-4dc8-9014-a62901d65c6c is 0 Apr 4 18:13:27.675: INFO: Restart count of pod container-probe-1942/liveness-4cd4a100-0170-4dc8-9014-a62901d65c6c is now 1 (26.055854408s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:13:27.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"container-probe-1942" for this suite. • [SLOW TEST:30.171 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":281,"completed":154,"skipped":2604,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:13:27.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:13:32.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2741" for this suite. 
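[Editor's note] The ReplicationController adoption test above turns on simple label matching: an orphan pod is adoptable when the controller's selector is a subset of the pod's labels. A sketch of that check:

```python
def selector_matches(selector, pod_labels):
    """A controller adopts an orphan pod when every key/value pair in
    its selector is present among the pod's labels (subset match)."""
    return all(pod_labels.get(k) == v for k, v in selector.items())
```

The extra labels a pod carries beyond the selector are irrelevant; only the selector's own keys must match.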
• [SLOW TEST:5.125 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":281,"completed":155,"skipped":2608,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:13:32.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Apr 4 18:13:32.919: INFO: Waiting up to 5m0s for pod "client-containers-4adf571a-f0e6-440c-9698-0383bb886193" in namespace "containers-5671" to be "Succeeded or Failed" Apr 4 18:13:33.008: INFO: Pod "client-containers-4adf571a-f0e6-440c-9698-0383bb886193": Phase="Pending", Reason="", readiness=false. Elapsed: 89.335672ms Apr 4 18:13:35.013: INFO: Pod "client-containers-4adf571a-f0e6-440c-9698-0383bb886193": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09375613s Apr 4 18:13:37.017: INFO: Pod "client-containers-4adf571a-f0e6-440c-9698-0383bb886193": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.0977967s STEP: Saw pod success Apr 4 18:13:37.017: INFO: Pod "client-containers-4adf571a-f0e6-440c-9698-0383bb886193" satisfied condition "Succeeded or Failed" Apr 4 18:13:37.020: INFO: Trying to get logs from node latest-worker2 pod client-containers-4adf571a-f0e6-440c-9698-0383bb886193 container test-container: STEP: delete the pod Apr 4 18:13:37.036: INFO: Waiting for pod client-containers-4adf571a-f0e6-440c-9698-0383bb886193 to disappear Apr 4 18:13:37.057: INFO: Pod client-containers-4adf571a-f0e6-440c-9698-0383bb886193 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:13:37.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5671" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":281,"completed":156,"skipped":2617,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:13:37.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in 
namespace statefulset-1942 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1942 STEP: Creating statefulset with conflicting port in namespace statefulset-1942 STEP: Waiting until pod test-pod will start running in namespace statefulset-1942 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1942 Apr 4 18:13:43.177: INFO: Observed stateful pod in namespace: statefulset-1942, name: ss-0, uid: ff85bea4-7870-411a-88ef-3ab6256d2439, status phase: Pending. Waiting for statefulset controller to delete. Apr 4 18:13:43.569: INFO: Observed stateful pod in namespace: statefulset-1942, name: ss-0, uid: ff85bea4-7870-411a-88ef-3ab6256d2439, status phase: Failed. Waiting for statefulset controller to delete. Apr 4 18:13:43.577: INFO: Observed stateful pod in namespace: statefulset-1942, name: ss-0, uid: ff85bea4-7870-411a-88ef-3ab6256d2439, status phase: Failed. Waiting for statefulset controller to delete. 
Apr 4 18:13:43.593: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1942 STEP: Removing pod with conflicting port in namespace statefulset-1942 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1942 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 18:13:47.681: INFO: Deleting all statefulset in ns statefulset-1942 Apr 4 18:13:47.685: INFO: Scaling statefulset ss to 0 Apr 4 18:13:57.717: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 18:13:57.720: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:13:57.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1942" for this suite. • [SLOW TEST:20.678 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":281,"completed":157,"skipped":2623,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:13:57.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:180 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 4 18:14:02.364: INFO: Successfully updated pod "pod-update-activedeadlineseconds-4c0d1790-785c-4d4c-949c-b38c999de78e" Apr 4 18:14:02.364: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-4c0d1790-785c-4d4c-949c-b38c999de78e" in namespace "pods-9317" to be "terminated due to deadline exceeded" Apr 4 18:14:02.384: INFO: Pod "pod-update-activedeadlineseconds-4c0d1790-785c-4d4c-949c-b38c999de78e": Phase="Running", Reason="", readiness=true. Elapsed: 20.420407ms Apr 4 18:14:04.388: INFO: Pod "pod-update-activedeadlineseconds-4c0d1790-785c-4d4c-949c-b38c999de78e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.02485021s Apr 4 18:14:04.389: INFO: Pod "pod-update-activedeadlineseconds-4c0d1790-785c-4d4c-949c-b38c999de78e" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:14:04.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9317" for this suite. 
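[Editor's note] The activeDeadlineSeconds test above waits for the pod to flip from Running to Failed/DeadlineExceeded once its running time passes the deadline. The decision itself is a one-liner, sketched here (function name is ours):

```python
def check_deadline(elapsed_seconds, active_deadline_seconds):
    """Phase and reason a running pod ends up with once its running time
    exceeds spec.activeDeadlineSeconds (None means no deadline is set)."""
    if active_deadline_seconds is not None and elapsed_seconds > active_deadline_seconds:
        return "Failed", "DeadlineExceeded"
    return "Running", ""
```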
• [SLOW TEST:6.654 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":281,"completed":158,"skipped":2647,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:14:04.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-8869 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Apr 4 18:14:04.491: INFO: Found 0 stateful pods, waiting for 3 Apr 4 18:14:14.496: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - 
Ready=true Apr 4 18:14:14.496: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:14:14.496: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 4 18:14:24.496: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:14:24.496: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:14:24.496: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 4 18:14:24.522: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 4 18:14:34.593: INFO: Updating stateful set ss2 Apr 4 18:14:34.643: INFO: Waiting for Pod statefulset-8869/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Apr 4 18:14:45.170: INFO: Found 2 stateful pods, waiting for 3 Apr 4 18:14:55.175: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:14:55.175: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:14:55.176: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 4 18:14:55.201: INFO: Updating stateful set ss2 Apr 4 18:14:55.227: INFO: Waiting for Pod statefulset-8869/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 4 18:15:05.254: INFO: Updating stateful set ss2 Apr 4 18:15:05.311: INFO: Waiting for StatefulSet statefulset-8869/ss2 to complete update Apr 4 18:15:05.311: INFO: Waiting for Pod statefulset-8869/ss2-0 to have revision ss2-84f9d6bf57 
update revision ss2-65c7964b94 Apr 4 18:15:15.319: INFO: Waiting for StatefulSet statefulset-8869/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 18:15:25.319: INFO: Deleting all statefulset in ns statefulset-8869 Apr 4 18:15:25.322: INFO: Scaling statefulset ss2 to 0 Apr 4 18:15:35.355: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 18:15:35.358: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:15:35.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8869" for this suite. • [SLOW TEST:90.981 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":281,"completed":159,"skipped":2700,"failed":0} SS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
Apr 4 18:15:35.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-8136/configmap-test-1bf659f9-1e0b-4b55-a446-ad1f13bc2cea STEP: Creating a pod to test consume configMaps Apr 4 18:15:35.479: INFO: Waiting up to 5m0s for pod "pod-configmaps-807a8189-2910-463c-ab14-5748dcaade04" in namespace "configmap-8136" to be "Succeeded or Failed" Apr 4 18:15:35.483: INFO: Pod "pod-configmaps-807a8189-2910-463c-ab14-5748dcaade04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111017ms Apr 4 18:15:37.487: INFO: Pod "pod-configmaps-807a8189-2910-463c-ab14-5748dcaade04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007397173s Apr 4 18:15:39.490: INFO: Pod "pod-configmaps-807a8189-2910-463c-ab14-5748dcaade04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011150963s STEP: Saw pod success Apr 4 18:15:39.490: INFO: Pod "pod-configmaps-807a8189-2910-463c-ab14-5748dcaade04" satisfied condition "Succeeded or Failed" Apr 4 18:15:39.493: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-807a8189-2910-463c-ab14-5748dcaade04 container env-test: STEP: delete the pod Apr 4 18:15:39.527: INFO: Waiting for pod pod-configmaps-807a8189-2910-463c-ab14-5748dcaade04 to disappear Apr 4 18:15:39.547: INFO: Pod pod-configmaps-807a8189-2910-463c-ab14-5748dcaade04 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:15:39.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8136" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":281,"completed":160,"skipped":2702,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:15:39.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:15:39.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4877" for this suite. 
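The QOS-class verification above relies on the standard Kubernetes rule: a pod is Guaranteed when limits are set for cpu and memory in every container and requests (where set) equal those limits, Burstable when at least one request or limit is set, and BestEffort otherwise. A simplified sketch of that classification — resources are plain dicts here, and init containers are ignored:

```python
def qos_class(containers):
    """Classify a pod's QoS the way the test above expects (simplified).

    `containers` is a list of {"requests": {...}, "limits": {...}} dicts
    with optional "cpu"/"memory" keys, e.g. {"cpu": "100m"}.
    """
    requests, limits = {}, {}
    for c in containers:
        requests.update(c.get("requests", {}))
        limits.update(c.get("limits", {}))
    if not requests and not limits:
        return "BestEffort"
    # Guaranteed: every container has cpu+memory limits, and requests
    # (defaulted to the limit when unset, as the API server does)
    # equal the limits.
    guaranteed = all(
        c.get("limits", {}).get(res) is not None
        and c.get("requests", {}).get(res, c["limits"][res]) == c["limits"][res]
        for c in containers
        for res in ("cpu", "memory")
    )
    return "Guaranteed" if guaranteed else "Burstable"

# Matching requests and limits for memory and cpu -> Guaranteed,
# which is the condition the QOS test above verifies.
pod = [{"requests": {"cpu": "100m", "memory": "100Mi"},
        "limits": {"cpu": "100m", "memory": "100Mi"}}]
assert qos_class(pod) == "Guaranteed"
```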
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":281,"completed":161,"skipped":2705,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:15:39.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:301 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Apr 4 18:15:39.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1882' Apr 4 18:15:45.508: INFO: stderr: "" Apr 4 18:15:45.508: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 4 18:15:45.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1882' Apr 4 18:15:45.681: INFO: stderr: "" Apr 4 18:15:45.681: INFO: stdout: "update-demo-nautilus-mm9gk update-demo-nautilus-vj4n6 " Apr 4 18:15:45.681: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:15:45.787: INFO: stderr: "" Apr 4 18:15:45.787: INFO: stdout: "" Apr 4 18:15:45.787: INFO: update-demo-nautilus-mm9gk is created but not running Apr 4 18:15:50.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1882' Apr 4 18:15:50.873: INFO: stderr: "" Apr 4 18:15:50.873: INFO: stdout: "update-demo-nautilus-mm9gk update-demo-nautilus-vj4n6 " Apr 4 18:15:50.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:15:50.965: INFO: stderr: "" Apr 4 18:15:50.965: INFO: stdout: "true" Apr 4 18:15:50.966: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:15:51.070: INFO: stderr: "" Apr 4 18:15:51.070: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 18:15:51.070: INFO: validating pod update-demo-nautilus-mm9gk Apr 4 18:15:51.074: INFO: got data: { "image": "nautilus.jpg" } Apr 4 18:15:51.074: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 18:15:51.074: INFO: update-demo-nautilus-mm9gk is verified up and running Apr 4 18:15:51.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vj4n6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:15:51.177: INFO: stderr: "" Apr 4 18:15:51.177: INFO: stdout: "true" Apr 4 18:15:51.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vj4n6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:15:51.283: INFO: stderr: "" Apr 4 18:15:51.283: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 18:15:51.283: INFO: validating pod update-demo-nautilus-vj4n6 Apr 4 18:15:51.287: INFO: got data: { "image": "nautilus.jpg" } Apr 4 18:15:51.287: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 4 18:15:51.287: INFO: update-demo-nautilus-vj4n6 is verified up and running STEP: scaling down the replication controller Apr 4 18:15:51.290: INFO: scanned /root for discovery docs: Apr 4 18:15:51.290: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1882' Apr 4 18:15:52.417: INFO: stderr: "" Apr 4 18:15:52.417: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 4 18:15:52.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1882' Apr 4 18:15:52.518: INFO: stderr: "" Apr 4 18:15:52.518: INFO: stdout: "update-demo-nautilus-mm9gk update-demo-nautilus-vj4n6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 4 18:15:57.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1882' Apr 4 18:15:57.616: INFO: stderr: "" Apr 4 18:15:57.616: INFO: stdout: "update-demo-nautilus-mm9gk update-demo-nautilus-vj4n6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 4 18:16:02.617: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1882' Apr 4 18:16:02.712: INFO: stderr: "" Apr 4 18:16:02.712: INFO: stdout: "update-demo-nautilus-mm9gk update-demo-nautilus-vj4n6 " STEP: Replicas for name=update-demo: expected=1 actual=2 Apr 4 18:16:07.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1882' Apr 4 18:16:07.812: INFO: stderr: "" Apr 4 18:16:07.812: INFO: stdout: "update-demo-nautilus-mm9gk " Apr 4 18:16:07.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:07.909: INFO: stderr: "" Apr 4 18:16:07.909: INFO: stdout: "true" Apr 4 18:16:07.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:08.002: INFO: stderr: "" Apr 4 18:16:08.002: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 18:16:08.002: INFO: validating pod update-demo-nautilus-mm9gk Apr 4 18:16:08.005: INFO: got data: { "image": "nautilus.jpg" } Apr 4 18:16:08.005: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 18:16:08.005: INFO: update-demo-nautilus-mm9gk is verified up and running STEP: scaling up the replication controller Apr 4 18:16:08.008: INFO: scanned /root for discovery docs: Apr 4 18:16:08.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1882' Apr 4 18:16:09.130: INFO: stderr: "" Apr 4 18:16:09.130: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Apr 4 18:16:09.130: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1882' Apr 4 18:16:09.225: INFO: stderr: "" Apr 4 18:16:09.225: INFO: stdout: "update-demo-nautilus-mm9gk update-demo-nautilus-x7bk8 " Apr 4 18:16:09.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:09.317: INFO: stderr: "" Apr 4 18:16:09.317: INFO: stdout: "true" Apr 4 18:16:09.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:09.463: INFO: stderr: "" Apr 4 18:16:09.463: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 18:16:09.463: INFO: validating pod update-demo-nautilus-mm9gk Apr 4 18:16:09.468: INFO: got data: { "image": "nautilus.jpg" } Apr 4 18:16:09.468: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 18:16:09.468: INFO: update-demo-nautilus-mm9gk is verified up and running Apr 4 18:16:09.468: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x7bk8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:09.562: INFO: stderr: "" Apr 4 18:16:09.562: INFO: stdout: "" Apr 4 18:16:09.562: INFO: update-demo-nautilus-x7bk8 is created but not running Apr 4 18:16:14.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1882' Apr 4 18:16:14.672: INFO: stderr: "" Apr 4 18:16:14.672: INFO: stdout: "update-demo-nautilus-mm9gk update-demo-nautilus-x7bk8 " Apr 4 18:16:14.672: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:14.752: INFO: stderr: "" Apr 4 18:16:14.752: INFO: stdout: "true" Apr 4 18:16:14.752: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm9gk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:14.838: INFO: stderr: "" Apr 4 18:16:14.838: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 18:16:14.838: INFO: validating pod update-demo-nautilus-mm9gk Apr 4 18:16:14.841: INFO: got data: { "image": "nautilus.jpg" } Apr 4 18:16:14.841: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 4 18:16:14.841: INFO: update-demo-nautilus-mm9gk is verified up and running Apr 4 18:16:14.842: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x7bk8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:14.942: INFO: stderr: "" Apr 4 18:16:14.942: INFO: stdout: "true" Apr 4 18:16:14.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-x7bk8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1882' Apr 4 18:16:15.034: INFO: stderr: "" Apr 4 18:16:15.034: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 4 18:16:15.034: INFO: validating pod update-demo-nautilus-x7bk8 Apr 4 18:16:15.038: INFO: got data: { "image": "nautilus.jpg" } Apr 4 18:16:15.038: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 4 18:16:15.038: INFO: update-demo-nautilus-x7bk8 is verified up and running STEP: using delete to clean up resources Apr 4 18:16:15.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1882' Apr 4 18:16:15.139: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 4 18:16:15.139: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 4 18:16:15.139: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1882' Apr 4 18:16:15.235: INFO: stderr: "No resources found in kubectl-1882 namespace.\n" Apr 4 18:16:15.235: INFO: stdout: "" Apr 4 18:16:15.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1882 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 18:16:15.339: INFO: stderr: "" Apr 4 18:16:15.340: INFO: stdout: "update-demo-nautilus-mm9gk\nupdate-demo-nautilus-x7bk8\n" Apr 4 18:16:15.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1882' Apr 4 18:16:15.936: INFO: stderr: "No resources found in kubectl-1882 namespace.\n" Apr 4 18:16:15.936: INFO: stdout: "" Apr 4 18:16:15.936: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1882 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 4 18:16:16.026: INFO: stderr: "" Apr 4 18:16:16.026: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:16:16.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1882" for this suite. 
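The scale-down wait above repeatedly runs `kubectl get pods -o template` and compares the number of returned names against the expected replica count (hence the repeated "Replicas for name=update-demo: expected=1 actual=2" lines until the extra pod disappears). A minimal sketch of that compare-and-retry step — `list_pods` is a hypothetical stand-in for the kubectl invocation:

```python
import time

def wait_for_replicas(list_pods, expected, timeout_s=300.0, interval_s=5.0):
    """Retry until list_pods() returns exactly `expected` pod names.

    Mirrors the "Replicas for name=update-demo: expected=1 actual=2"
    retry loop above; returns the final pod list, or None on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        pods = list_pods()
        if len(pods) == expected:
            return pods
        print(f"Replicas: expected={expected} actual={len(pods)}")
        time.sleep(interval_s)
    return None

# Hypothetical per-poll snapshots of the kubectl template output.
snapshots = iter([["update-demo-nautilus-mm9gk", "update-demo-nautilus-vj4n6"],
                  ["update-demo-nautilus-mm9gk"]])
assert wait_for_replicas(lambda: next(snapshots), 1,
                         timeout_s=5, interval_s=0.01) == ["update-demo-nautilus-mm9gk"]
```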
• [SLOW TEST:36.376 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":281,"completed":162,"skipped":2718,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:16:16.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 4 18:16:19.748: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:16:19.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3021" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":281,"completed":163,"skipped":2733,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:16:19.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 18:16:20.347: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 18:16:22.356: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620980, 
loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620980, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620980, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721620980, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 18:16:25.396: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 4 18:16:25.418: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:16:25.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2098" for this suite. STEP: Destroying namespace "webhook-2098-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.544 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":281,"completed":164,"skipped":2752,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:16:25.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 4 18:16:25.575: INFO: Waiting up to 5m0s for pod "pod-d25dc470-0ef5-40dc-96d8-9ce87cdf0d0a" in namespace "emptydir-1586" to be "Succeeded or Failed" Apr 4 18:16:25.595: INFO: Pod "pod-d25dc470-0ef5-40dc-96d8-9ce87cdf0d0a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.959004ms Apr 4 18:16:27.699: INFO: Pod "pod-d25dc470-0ef5-40dc-96d8-9ce87cdf0d0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12364456s Apr 4 18:16:29.703: INFO: Pod "pod-d25dc470-0ef5-40dc-96d8-9ce87cdf0d0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127871849s STEP: Saw pod success Apr 4 18:16:29.703: INFO: Pod "pod-d25dc470-0ef5-40dc-96d8-9ce87cdf0d0a" satisfied condition "Succeeded or Failed" Apr 4 18:16:29.706: INFO: Trying to get logs from node latest-worker2 pod pod-d25dc470-0ef5-40dc-96d8-9ce87cdf0d0a container test-container: STEP: delete the pod Apr 4 18:16:29.738: INFO: Waiting for pod pod-d25dc470-0ef5-40dc-96d8-9ce87cdf0d0a to disappear Apr 4 18:16:29.752: INFO: Pod pod-d25dc470-0ef5-40dc-96d8-9ce87cdf0d0a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:16:29.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1586" for this suite. 
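The emptydir test above mounts a tmpfs volume, writes a file as root with mode 0644, and asserts the observed permissions from inside the container. The permission check itself reduces to a stat of the file's mode bits, sketched here against a temporary directory rather than an actual tmpfs-backed emptyDir:

```python
import os
import stat
import tempfile

def file_mode(path):
    """Return the permission bits of `path` as an octal string like '0644'."""
    return format(stat.S_IMODE(os.stat(path).st_mode), "04o")

# Create a file with mode 0644, as the test container does on the volume.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "test-file")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
    os.close(fd)
    os.chmod(path, 0o644)  # make the mode explicit regardless of umask
    assert file_mode(path) == "0644"
```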
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":165,"skipped":2801,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:16:29.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:16:34.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6243" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":281,"completed":166,"skipped":2804,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:16:34.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 4 18:16:34.226: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 4 18:16:34.244: INFO: Waiting for terminating namespaces to be deleted... 
Apr 4 18:16:34.246: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 4 18:16:34.264: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 18:16:34.264: INFO: Container kindnet-cni ready: true, restart count 0 Apr 4 18:16:34.264: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 18:16:34.264: INFO: Container kube-proxy ready: true, restart count 0 Apr 4 18:16:34.264: INFO: bin-false0006c90e-b760-4097-b7a8-2cc13a595020 from kubelet-test-6243 started at 2020-04-04 18:16:30 +0000 UTC (1 container statuses recorded) Apr 4 18:16:34.264: INFO: Container bin-false0006c90e-b760-4097-b7a8-2cc13a595020 ready: false, restart count 0 Apr 4 18:16:34.264: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 4 18:16:34.269: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 18:16:34.269: INFO: Container kube-proxy ready: true, restart count 0 Apr 4 18:16:34.269: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 18:16:34.269: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-31746cb4-1768-46b6-ae6d-8e394b382fff 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-31746cb4-1768-46b6-ae6d-8e394b382fff off the node latest-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-31746cb4-1768-46b6-ae6d-8e394b382fff [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:16:52.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9284" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:18.412 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":281,"completed":167,"skipped":2806,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:16:52.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:16:52.524: INFO: (0) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 5.147907ms) Apr 4 18:16:52.527: INFO: (1) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.14862ms) Apr 4 18:16:52.530: INFO: (2) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.227123ms) Apr 4 18:16:52.555: INFO: (3) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 24.488219ms) Apr 4 18:16:52.558: INFO: (4) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.235657ms) Apr 4 18:16:52.561: INFO: (5) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.296116ms) Apr 4 18:16:52.564: INFO: (6) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.267775ms) Apr 4 18:16:52.568: INFO: (7) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.204067ms) Apr 4 18:16:52.571: INFO: (8) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.410365ms) Apr 4 18:16:52.575: INFO: (9) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.461215ms) Apr 4 18:16:52.578: INFO: (10) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.508086ms) Apr 4 18:16:52.582: INFO: (11) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.289844ms) Apr 4 18:16:52.585: INFO: (12) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 3.199016ms) Apr 4 18:16:52.592: INFO: (13) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 6.73272ms) Apr 4 18:16:52.595: INFO: (14) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.986982ms) Apr 4 18:16:52.598: INFO: (15) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.887614ms) Apr 4 18:16:52.600: INFO: (16) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.360594ms) Apr 4 18:16:52.603: INFO: (17) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.599693ms) Apr 4 18:16:52.605: INFO: (18) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.774073ms) Apr 4 18:16:52.608: INFO: (19) /api/v1/nodes/latest-worker/proxy/logs/:
containers/
pods/
(200; 2.464707ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:16:52.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4875" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":281,"completed":168,"skipped":2821,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:16:52.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 4 18:16:52.693: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 4 18:17:03.019: INFO: >>> kubeConfig: /root/.kube/config Apr 4 18:17:06.041: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:17:16.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-publish-openapi-1182" for this suite. • [SLOW TEST:23.886 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":281,"completed":169,"skipped":2840,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:17:16.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1448 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 4 18:17:16.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never 
--image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1967' Apr 4 18:17:16.645: INFO: stderr: "" Apr 4 18:17:16.645: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1453 Apr 4 18:17:16.652: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1967' Apr 4 18:17:22.743: INFO: stderr: "" Apr 4 18:17:22.743: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:17:22.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1967" for this suite. • [SLOW TEST:6.248 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":281,"completed":170,"skipped":2846,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:17:22.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 18:17:23.443: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 18:17:25.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621043, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621043, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621043, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621043, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 18:17:28.480: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:17:28.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9225-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:17:29.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1090" for this suite. STEP: Destroying namespace "webhook-1090-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.947 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":281,"completed":171,"skipped":2890,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Apr 4 18:17:29.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Apr 4 18:17:29.771: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:17:44.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3650" for this suite. • [SLOW TEST:15.108 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":281,"completed":172,"skipped":2935,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:17:44.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 4 18:17:44.894: INFO: Waiting up to 5m0s for pod "pod-d8be1974-6d3d-4d09-a410-5833bfe1c743" in namespace "emptydir-5542" to be "Succeeded or Failed" Apr 4 18:17:44.916: INFO: Pod "pod-d8be1974-6d3d-4d09-a410-5833bfe1c743": Phase="Pending", Reason="", readiness=false. Elapsed: 22.189389ms Apr 4 18:17:46.919: INFO: Pod "pod-d8be1974-6d3d-4d09-a410-5833bfe1c743": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025123539s Apr 4 18:17:48.924: INFO: Pod "pod-d8be1974-6d3d-4d09-a410-5833bfe1c743": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030158094s STEP: Saw pod success Apr 4 18:17:48.924: INFO: Pod "pod-d8be1974-6d3d-4d09-a410-5833bfe1c743" satisfied condition "Succeeded or Failed" Apr 4 18:17:48.936: INFO: Trying to get logs from node latest-worker2 pod pod-d8be1974-6d3d-4d09-a410-5833bfe1c743 container test-container: STEP: delete the pod Apr 4 18:17:49.036: INFO: Waiting for pod pod-d8be1974-6d3d-4d09-a410-5833bfe1c743 to disappear Apr 4 18:17:49.063: INFO: Pod pod-d8be1974-6d3d-4d09-a410-5833bfe1c743 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:17:49.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5542" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":173,"skipped":2942,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:17:49.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 18:17:50.154: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 18:17:52.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621070, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621070, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721621070, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621070, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 18:17:55.518: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:17:55.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7556" for this suite. STEP: Destroying namespace "webhook-7556-markers" for this suite. 
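The discovery steps above amount to fetching `/apis` and walking the group list until `admissionregistration.k8s.io/v1` turns up. A sketch of that lookup against a hand-written sample document; the JSON fragment is illustrative, shaped like an `APIGroupList`, and was not captured from this run:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A hand-written fragment of an /apis discovery document (sample data).
const apisDoc = `{
  "kind": "APIGroupList",
  "groups": [
    {"name": "apps",
     "versions": [{"groupVersion": "apps/v1", "version": "v1"}]},
    {"name": "admissionregistration.k8s.io",
     "versions": [{"groupVersion": "admissionregistration.k8s.io/v1", "version": "v1"}]}
  ]
}`

type groupVersion struct {
	GroupVersion string `json:"groupVersion"`
	Version      string `json:"version"`
}

type apiGroupList struct {
	Groups []struct {
		Name     string         `json:"name"`
		Versions []groupVersion `json:"versions"`
	} `json:"groups"`
}

// hasGroupVersion reports whether the discovery document lists the
// given group/version, mirroring the test's check for
// admissionregistration.k8s.io/v1.
func hasGroupVersion(doc []byte, gv string) (bool, error) {
	var list apiGroupList
	if err := json.Unmarshal(doc, &list); err != nil {
		return false, err
	}
	for _, g := range list.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == gv {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasGroupVersion([]byte(apisDoc), "admissionregistration.k8s.io/v1")
	fmt.Println(ok, err)
}
```

The test then descends into `/apis/admissionregistration.k8s.io/v1` and checks for the `mutatingwebhookconfigurations` and `validatingwebhookconfigurations` resources; the lookup logic is the same, just over an `APIResourceList` instead.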
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.699 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":281,"completed":174,"skipped":2945,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:17:55.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 
'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:18:26.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2216" for this suite. 
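The container names suggest (an assumption; the log does not spell it out) that the rpa/rpof/rpn suffixes encode the three RestartPolicy values Always, OnFailure, and Never, which is what drives the expected 'RestartCount', 'Phase', and 'State' in each case. A sketch of the restart decision those policies imply:

```go
package main

import "fmt"

// shouldRestart mirrors the kubelet's restart decision: Always restarts
// regardless of exit code, OnFailure restarts only non-zero exits, and
// Never leaves the container terminated.
func shouldRestart(policy string, exitCode int) bool {
	switch policy {
	case "Always":
		return true
	case "OnFailure":
		return exitCode != 0
	default: // "Never"
		return false
	}
}

func main() {
	cases := []struct {
		policy string
		exit   int
	}{{"Always", 0}, {"OnFailure", 0}, {"OnFailure", 1}, {"Never", 1}}
	for _, c := range cases {
		fmt.Printf("policy=%s exit=%d restart=%v\n", c.policy, c.exit, shouldRestart(c.policy, c.exit))
	}
}
```

This is why a pod whose container exits 0 under OnFailure ends up Succeeded with restart count 0, while the same command under Always keeps the restart count climbing.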
• [SLOW TEST:30.789 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":281,"completed":175,"skipped":2946,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:18:26.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 18:18:26.638: INFO: Waiting up to 5m0s for pod "downwardapi-volume-faa6efa6-f2a3-4cd7-83b3-00771e0fc358" in namespace "downward-api-8841" to be "Succeeded or Failed" Apr 4 18:18:26.642: INFO: 
Pod "downwardapi-volume-faa6efa6-f2a3-4cd7-83b3-00771e0fc358": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213634ms Apr 4 18:18:28.648: INFO: Pod "downwardapi-volume-faa6efa6-f2a3-4cd7-83b3-00771e0fc358": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010301706s Apr 4 18:18:30.651: INFO: Pod "downwardapi-volume-faa6efa6-f2a3-4cd7-83b3-00771e0fc358": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013555512s STEP: Saw pod success Apr 4 18:18:30.651: INFO: Pod "downwardapi-volume-faa6efa6-f2a3-4cd7-83b3-00771e0fc358" satisfied condition "Succeeded or Failed" Apr 4 18:18:30.654: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-faa6efa6-f2a3-4cd7-83b3-00771e0fc358 container client-container: STEP: delete the pod Apr 4 18:18:30.698: INFO: Waiting for pod downwardapi-volume-faa6efa6-f2a3-4cd7-83b3-00771e0fc358 to disappear Apr 4 18:18:30.702: INFO: Pod downwardapi-volume-faa6efa6-f2a3-4cd7-83b3-00771e0fc358 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:18:30.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8841" for this suite. 
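The downward API volume in this test projects the container's memory limit into a file, scaled by a divisor. A sketch of that scaling as integer ceiling division; `projectQuantity` is an illustrative name, and the 1Mi divisor in the example is the conventional choice rather than something stated in this log:

```go
package main

import "fmt"

// projectQuantity scales a resource limit by a divisor the way the
// downward API does before writing it into the volume file: integer
// division, rounding up so a partial unit still counts as one.
func projectQuantity(limitBytes, divisorBytes int64) int64 {
	return (limitBytes + divisorBytes - 1) / divisorBytes
}

func main() {
	const Mi = int64(1) << 20
	// A 256Mi memory limit with divisor 1Mi projects as "256".
	fmt.Println(projectQuantity(256*Mi, Mi))
}
```

The container under test then just reads that file back and the framework compares the contents against the expected projected value.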
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":281,"completed":176,"skipped":2951,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:18:30.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 18:18:31.287: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 18:18:33.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621111, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621111, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0,
ext:63721621111, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621111, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 18:18:36.325: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:18:37.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9849" for this suite.
STEP: Destroying namespace "webhook-9849-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.437 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing mutating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":281,"completed":177,"skipped":2972,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:18:37.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 4 18:18:41.745: INFO: Successfully updated pod "adopt-release-fjpt9"
STEP: Checking that the Job readopts the Pod
Apr 4 18:18:41.745: INFO: Waiting up to 15m0s for pod "adopt-release-fjpt9" in namespace "job-1506" to be "adopted"
Apr 4 18:18:41.769: INFO: Pod "adopt-release-fjpt9": Phase="Running", Reason="", readiness=true.
Elapsed: 23.891193ms
Apr 4 18:18:43.772: INFO: Pod "adopt-release-fjpt9": Phase="Running", Reason="", readiness=true. Elapsed: 2.027021103s
Apr 4 18:18:43.772: INFO: Pod "adopt-release-fjpt9" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 4 18:18:44.290: INFO: Successfully updated pod "adopt-release-fjpt9"
STEP: Checking that the Job releases the Pod
Apr 4 18:18:44.290: INFO: Waiting up to 15m0s for pod "adopt-release-fjpt9" in namespace "job-1506" to be "released"
Apr 4 18:18:44.346: INFO: Pod "adopt-release-fjpt9": Phase="Running", Reason="", readiness=true. Elapsed: 55.803715ms
Apr 4 18:18:44.346: INFO: Pod "adopt-release-fjpt9" satisfied condition "released"
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:18:44.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1506" for this suite.
• [SLOW TEST:7.208 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching orphans and release non-matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":281,"completed":178,"skipped":3003,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:18:44.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace
api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-5879
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 4 18:18:44.416: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 4 18:18:44.478: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 18:18:46.482: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 18:18:48.483: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:18:50.482: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:18:52.481: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:18:54.482: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:18:56.482: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:18:58.482: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:19:00.482: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 4 18:19:00.488: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 4 18:19:02.492: INFO: The status of Pod netserver-1 is Running (Ready = false)
Apr 4 18:19:04.492: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 4 18:19:08.564: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.219 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5879 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 18:19:08.564: INFO: >>> kubeConfig: /root/.kube/config
I0404 18:19:08.599991 7
log.go:172] (0xc00288e420) (0xc00225bae0) Create stream
I0404 18:19:08.600026 7 log.go:172] (0xc00288e420) (0xc00225bae0) Stream added, broadcasting: 1
I0404 18:19:08.602644 7 log.go:172] (0xc00288e420) Reply frame received for 1
I0404 18:19:08.602697 7 log.go:172] (0xc00288e420) (0xc000d03360) Create stream
I0404 18:19:08.602710 7 log.go:172] (0xc00288e420) (0xc000d03360) Stream added, broadcasting: 3
I0404 18:19:08.603713 7 log.go:172] (0xc00288e420) Reply frame received for 3
I0404 18:19:08.603749 7 log.go:172] (0xc00288e420) (0xc0011165a0) Create stream
I0404 18:19:08.603761 7 log.go:172] (0xc00288e420) (0xc0011165a0) Stream added, broadcasting: 5
I0404 18:19:08.604617 7 log.go:172] (0xc00288e420) Reply frame received for 5
I0404 18:19:09.658261 7 log.go:172] (0xc00288e420) Data frame received for 3
I0404 18:19:09.658294 7 log.go:172] (0xc000d03360) (3) Data frame handling
I0404 18:19:09.658314 7 log.go:172] (0xc000d03360) (3) Data frame sent
I0404 18:19:09.658327 7 log.go:172] (0xc00288e420) Data frame received for 3
I0404 18:19:09.658342 7 log.go:172] (0xc000d03360) (3) Data frame handling
I0404 18:19:09.658366 7 log.go:172] (0xc00288e420) Data frame received for 5
I0404 18:19:09.658377 7 log.go:172] (0xc0011165a0) (5) Data frame handling
I0404 18:19:09.661057 7 log.go:172] (0xc00288e420) Data frame received for 1
I0404 18:19:09.661100 7 log.go:172] (0xc00225bae0) (1) Data frame handling
I0404 18:19:09.661304 7 log.go:172] (0xc00225bae0) (1) Data frame sent
I0404 18:19:09.661336 7 log.go:172] (0xc00288e420) (0xc00225bae0) Stream removed, broadcasting: 1
I0404 18:19:09.661362 7 log.go:172] (0xc00288e420) Go away received
I0404 18:19:09.661589 7 log.go:172] (0xc00288e420) (0xc00225bae0) Stream removed, broadcasting: 1
I0404 18:19:09.661626 7 log.go:172] (0xc00288e420) (0xc000d03360) Stream removed, broadcasting: 3
I0404 18:19:09.661639 7 log.go:172] (0xc00288e420) (0xc0011165a0) Stream removed, broadcasting: 5
Apr 4 18:19:09.661: INFO: Found all expected
endpoints: [netserver-0]
Apr 4 18:19:09.664: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.53 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5879 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 18:19:09.665: INFO: >>> kubeConfig: /root/.kube/config
I0404 18:19:09.699681 7 log.go:172] (0xc0026709a0) (0xc000d03b80) Create stream
I0404 18:19:09.699703 7 log.go:172] (0xc0026709a0) (0xc000d03b80) Stream added, broadcasting: 1
I0404 18:19:09.702897 7 log.go:172] (0xc0026709a0) Reply frame received for 1
I0404 18:19:09.702936 7 log.go:172] (0xc0026709a0) (0xc00225bb80) Create stream
I0404 18:19:09.702950 7 log.go:172] (0xc0026709a0) (0xc00225bb80) Stream added, broadcasting: 3
I0404 18:19:09.704173 7 log.go:172] (0xc0026709a0) Reply frame received for 3
I0404 18:19:09.704201 7 log.go:172] (0xc0026709a0) (0xc001116780) Create stream
I0404 18:19:09.704224 7 log.go:172] (0xc0026709a0) (0xc001116780) Stream added, broadcasting: 5
I0404 18:19:09.705625 7 log.go:172] (0xc0026709a0) Reply frame received for 5
I0404 18:19:10.789619 7 log.go:172] (0xc0026709a0) Data frame received for 3
I0404 18:19:10.789730 7 log.go:172] (0xc00225bb80) (3) Data frame handling
I0404 18:19:10.789782 7 log.go:172] (0xc00225bb80) (3) Data frame sent
I0404 18:19:10.789884 7 log.go:172] (0xc0026709a0) Data frame received for 3
I0404 18:19:10.789927 7 log.go:172] (0xc00225bb80) (3) Data frame handling
I0404 18:19:10.790099 7 log.go:172] (0xc0026709a0) Data frame received for 5
I0404 18:19:10.790120 7 log.go:172] (0xc001116780) (5) Data frame handling
I0404 18:19:10.791897 7 log.go:172] (0xc0026709a0) Data frame received for 1
I0404 18:19:10.791930 7 log.go:172] (0xc000d03b80) (1) Data frame handling
I0404 18:19:10.791961 7 log.go:172] (0xc000d03b80) (1) Data frame sent
I0404 18:19:10.792009 7 log.go:172] (0xc0026709a0) (0xc000d03b80) Stream removed, broadcasting: 1
I0404
18:19:10.792132 7 log.go:172] (0xc0026709a0) (0xc000d03b80) Stream removed, broadcasting: 1
I0404 18:19:10.792226 7 log.go:172] (0xc0026709a0) (0xc00225bb80) Stream removed, broadcasting: 3
I0404 18:19:10.792271 7 log.go:172] (0xc0026709a0) (0xc001116780) Stream removed, broadcasting: 5
I0404 18:19:10.792345 7 log.go:172] (0xc0026709a0) Go away received
Apr 4 18:19:10.792: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:19:10.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5879" for this suite.
• [SLOW TEST:26.434 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":179,"skipped":3013,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:19:10.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide
DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3582.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3582.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 18:19:17.159: INFO: DNS probes using dns-test-c39b3924-1743-422f-9b30-db69c7635582 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3582.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3582.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 18:19:23.287: INFO: File wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:23.291: INFO: File jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:23.291: INFO: Lookups using dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 failed for: [wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local]
Apr 4 18:19:28.295: INFO: File wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:28.299: INFO: File jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains '' instead of 'bar.example.com.'
Apr 4 18:19:28.299: INFO: Lookups using dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 failed for: [wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local]
Apr 4 18:19:33.296: INFO: File wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:33.299: INFO: File jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:33.299: INFO: Lookups using dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 failed for: [wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local]
Apr 4 18:19:38.299: INFO: File wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:38.303: INFO: File jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:38.303: INFO: Lookups using dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 failed for: [wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local]
Apr 4 18:19:43.296: INFO: File wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:43.318: INFO: File jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local from pod dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 4 18:19:43.318: INFO: Lookups using dns-3582/dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 failed for: [wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local]
Apr 4 18:19:48.300: INFO: DNS probes using dns-test-32e5c965-6c22-4ca6-a5be-5ad4b3cda716 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3582.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3582.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3582.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3582.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 18:19:55.010: INFO: DNS probes using dns-test-30905c41-1a72-4fe1-b406-c02246b7193d succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:19:55.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP:
Destroying namespace "dns-3582" for this suite.
• [SLOW TEST:44.300 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":281,"completed":180,"skipped":3028,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:19:55.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249
[It] should support proxy with --port 0 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
Apr 4 18:19:55.462: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:19:55.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4338" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":281,"completed":181,"skipped":3030,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:19:55.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:20:06.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-963" for this suite.
• [SLOW TEST:11.107 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller.
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":281,"completed":182,"skipped":3038,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:20:06.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-z8mb
STEP: Creating a pod to test atomic-volume-subpath
Apr 4 18:20:06.755: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-z8mb" in namespace "subpath-3172" to be "Succeeded or Failed"
Apr 4 18:20:06.762: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.353292ms
Apr 4 18:20:08.765: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010429822s
Apr 4 18:20:10.769: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true.
Elapsed: 4.014399487s
Apr 4 18:20:12.774: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 6.019092833s
Apr 4 18:20:14.778: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 8.023508208s
Apr 4 18:20:16.783: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 10.02781144s
Apr 4 18:20:18.787: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 12.032032134s
Apr 4 18:20:20.791: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 14.036253046s
Apr 4 18:20:22.796: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 16.040699359s
Apr 4 18:20:24.800: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 18.044893729s
Apr 4 18:20:26.804: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 20.049117908s
Apr 4 18:20:28.809: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Running", Reason="", readiness=true. Elapsed: 22.053805509s
Apr 4 18:20:30.812: INFO: Pod "pod-subpath-test-downwardapi-z8mb": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.057492339s
STEP: Saw pod success
Apr 4 18:20:30.812: INFO: Pod "pod-subpath-test-downwardapi-z8mb" satisfied condition "Succeeded or Failed"
Apr 4 18:20:30.815: INFO: Trying to get logs from node latest-worker pod pod-subpath-test-downwardapi-z8mb container test-container-subpath-downwardapi-z8mb:
STEP: delete the pod
Apr 4 18:20:30.844: INFO: Waiting for pod pod-subpath-test-downwardapi-z8mb to disappear
Apr 4 18:20:30.868: INFO: Pod pod-subpath-test-downwardapi-z8mb no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-z8mb
Apr 4 18:20:30.868: INFO: Deleting pod "pod-subpath-test-downwardapi-z8mb" in namespace "subpath-3172"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:20:30.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3172" for this suite.
• [SLOW TEST:24.191 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":281,"completed":183,"skipped":3053,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:20:30.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 18:20:31.670: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 18:20:33.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621231, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621231, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621231, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721621231, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 18:20:36.706: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work
[Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:20:37.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-812" for this suite.
STEP: Destroying namespace "webhook-812-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.373 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":281,"completed":184,"skipped":3071,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:20:37.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Apr 4 18:20:37.286: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:20:45.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9311" for this suite. • [SLOW TEST:7.945 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":281,"completed":185,"skipped":3092,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:20:45.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be 
provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 4 18:20:45.331: INFO: Waiting up to 5m0s for pod "pod-c22b7285-4cc9-414e-b4e1-7625cbbe862a" in namespace "emptydir-8850" to be "Succeeded or Failed" Apr 4 18:20:45.348: INFO: Pod "pod-c22b7285-4cc9-414e-b4e1-7625cbbe862a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.994147ms Apr 4 18:20:47.352: INFO: Pod "pod-c22b7285-4cc9-414e-b4e1-7625cbbe862a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021198207s Apr 4 18:20:49.356: INFO: Pod "pod-c22b7285-4cc9-414e-b4e1-7625cbbe862a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025098307s STEP: Saw pod success Apr 4 18:20:49.356: INFO: Pod "pod-c22b7285-4cc9-414e-b4e1-7625cbbe862a" satisfied condition "Succeeded or Failed" Apr 4 18:20:49.359: INFO: Trying to get logs from node latest-worker2 pod pod-c22b7285-4cc9-414e-b4e1-7625cbbe862a container test-container: STEP: delete the pod Apr 4 18:20:49.406: INFO: Waiting for pod pod-c22b7285-4cc9-414e-b4e1-7625cbbe862a to disappear Apr 4 18:20:49.431: INFO: Pod pod-c22b7285-4cc9-414e-b4e1-7625cbbe862a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:20:49.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8850" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":186,"skipped":3105,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:20:49.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-9ae200d7-6074-4832-bfe3-39b0038bba27 STEP: Creating a pod to test consume configMaps Apr 4 18:20:49.505: INFO: Waiting up to 5m0s for pod "pod-configmaps-9eed1773-5cea-4f20-844e-e1969847d25a" in namespace "configmap-2084" to be "Succeeded or Failed" Apr 4 18:20:49.509: INFO: Pod "pod-configmaps-9eed1773-5cea-4f20-844e-e1969847d25a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.797753ms Apr 4 18:20:51.629: INFO: Pod "pod-configmaps-9eed1773-5cea-4f20-844e-e1969847d25a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124348611s Apr 4 18:20:53.633: INFO: Pod "pod-configmaps-9eed1773-5cea-4f20-844e-e1969847d25a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.12810238s STEP: Saw pod success Apr 4 18:20:53.633: INFO: Pod "pod-configmaps-9eed1773-5cea-4f20-844e-e1969847d25a" satisfied condition "Succeeded or Failed" Apr 4 18:20:53.635: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-9eed1773-5cea-4f20-844e-e1969847d25a container configmap-volume-test: STEP: delete the pod Apr 4 18:20:53.653: INFO: Waiting for pod pod-configmaps-9eed1773-5cea-4f20-844e-e1969847d25a to disappear Apr 4 18:20:53.685: INFO: Pod pod-configmaps-9eed1773-5cea-4f20-844e-e1969847d25a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:20:53.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2084" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":281,"completed":187,"skipped":3129,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:20:53.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-6e74a6e1-5944-45ef-ad28-2aead1175ab7 STEP: Creating a pod to test consume configMaps Apr 4 18:20:53.786: INFO: Waiting up to 
5m0s for pod "pod-projected-configmaps-41f0e5d7-eb5d-41dd-9a12-573b48d7590e" in namespace "projected-8719" to be "Succeeded or Failed" Apr 4 18:20:53.790: INFO: Pod "pod-projected-configmaps-41f0e5d7-eb5d-41dd-9a12-573b48d7590e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.299893ms Apr 4 18:20:55.802: INFO: Pod "pod-projected-configmaps-41f0e5d7-eb5d-41dd-9a12-573b48d7590e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015842785s Apr 4 18:20:57.821: INFO: Pod "pod-projected-configmaps-41f0e5d7-eb5d-41dd-9a12-573b48d7590e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034456007s STEP: Saw pod success Apr 4 18:20:57.821: INFO: Pod "pod-projected-configmaps-41f0e5d7-eb5d-41dd-9a12-573b48d7590e" satisfied condition "Succeeded or Failed" Apr 4 18:20:57.824: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-41f0e5d7-eb5d-41dd-9a12-573b48d7590e container projected-configmap-volume-test: STEP: delete the pod Apr 4 18:20:57.861: INFO: Waiting for pod pod-projected-configmaps-41f0e5d7-eb5d-41dd-9a12-573b48d7590e to disappear Apr 4 18:20:57.868: INFO: Pod pod-projected-configmaps-41f0e5d7-eb5d-41dd-9a12-573b48d7590e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:20:57.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8719" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":281,"completed":188,"skipped":3136,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:20:57.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:20:57.946: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 4 18:20:57.952: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:20:57.969: INFO: Number of nodes with available pods: 0 Apr 4 18:20:57.969: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:20:59.007: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:20:59.010: INFO: Number of nodes with available pods: 0 Apr 4 18:20:59.010: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:21:00.061: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:00.064: INFO: Number of nodes with available pods: 0 Apr 4 18:21:00.064: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:21:00.974: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:00.977: INFO: Number of nodes with available pods: 0 Apr 4 18:21:00.977: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:21:01.973: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:01.977: INFO: Number of nodes with available pods: 1 Apr 4 18:21:01.977: INFO: Node latest-worker is running more than one daemon pod Apr 4 18:21:02.973: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:02.976: INFO: Number of nodes with available pods: 2 Apr 4 18:21:02.976: INFO: Number of 
running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 4 18:21:03.007: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:03.007: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:03.037: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:04.041: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:04.041: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:04.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:05.065: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:05.065: INFO: Pod daemon-set-2rvsq is not available Apr 4 18:21:05.065: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:05.085: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:06.041: INFO: Wrong image for pod: daemon-set-2rvsq. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:06.041: INFO: Pod daemon-set-2rvsq is not available Apr 4 18:21:06.041: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:06.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:07.042: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:07.042: INFO: Pod daemon-set-2rvsq is not available Apr 4 18:21:07.042: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:07.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:08.042: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:08.042: INFO: Pod daemon-set-2rvsq is not available Apr 4 18:21:08.042: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:08.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:09.041: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 4 18:21:09.041: INFO: Pod daemon-set-2rvsq is not available Apr 4 18:21:09.041: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:09.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:10.042: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:10.042: INFO: Pod daemon-set-2rvsq is not available Apr 4 18:21:10.042: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:10.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:11.042: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:11.042: INFO: Pod daemon-set-2rvsq is not available Apr 4 18:21:11.042: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:11.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:12.042: INFO: Wrong image for pod: daemon-set-2rvsq. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:12.042: INFO: Pod daemon-set-2rvsq is not available Apr 4 18:21:12.042: INFO: Wrong image for pod: daemon-set-csw58. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:12.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:13.043: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:13.043: INFO: Pod daemon-set-qs5ln is not available Apr 4 18:21:13.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:14.044: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:14.044: INFO: Pod daemon-set-qs5ln is not available Apr 4 18:21:14.105: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:15.042: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:15.042: INFO: Pod daemon-set-qs5ln is not available Apr 4 18:21:15.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:16.051: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 4 18:21:16.051: INFO: Pod daemon-set-qs5ln is not available Apr 4 18:21:16.055: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:17.042: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:17.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:18.042: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:18.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:19.042: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:19.042: INFO: Pod daemon-set-csw58 is not available Apr 4 18:21:19.045: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:20.041: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:20.041: INFO: Pod daemon-set-csw58 is not available Apr 4 18:21:20.049: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:21.041: INFO: Wrong image for pod: daemon-set-csw58. 
Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:21.042: INFO: Pod daemon-set-csw58 is not available Apr 4 18:21:21.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:22.041: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:22.041: INFO: Pod daemon-set-csw58 is not available Apr 4 18:21:22.046: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:23.046: INFO: Wrong image for pod: daemon-set-csw58. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine. Apr 4 18:21:23.046: INFO: Pod daemon-set-csw58 is not available Apr 4 18:21:23.061: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:24.041: INFO: Pod daemon-set-6pfj9 is not available Apr 4 18:21:24.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 4 18:21:24.047: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:24.049: INFO: Number of nodes with available pods: 1 Apr 4 18:21:24.049: INFO: Node latest-worker2 is running more than one daemon pod Apr 4 18:21:25.054: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:25.058: INFO: Number of nodes with available pods: 1 Apr 4 18:21:25.058: INFO: Node latest-worker2 is running more than one daemon pod Apr 4 18:21:26.053: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:26.058: INFO: Number of nodes with available pods: 1 Apr 4 18:21:26.058: INFO: Node latest-worker2 is running more than one daemon pod Apr 4 18:21:27.053: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:27.056: INFO: Number of nodes with available pods: 1 Apr 4 18:21:27.056: INFO: Node latest-worker2 is running more than one daemon pod Apr 4 18:21:28.079: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:28.082: INFO: Number of nodes with available pods: 1 Apr 4 18:21:28.082: INFO: Node latest-worker2 is running more than one daemon pod Apr 4 18:21:29.054: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:29.057: INFO: Number of nodes with available pods: 1 Apr 4 18:21:29.057: INFO: Node 
latest-worker2 is running more than one daemon pod Apr 4 18:21:30.053: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 4 18:21:30.057: INFO: Number of nodes with available pods: 2 Apr 4 18:21:30.057: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5142, will wait for the garbage collector to delete the pods Apr 4 18:21:30.128: INFO: Deleting DaemonSet.extensions daemon-set took: 5.157838ms Apr 4 18:21:30.428: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.284483ms Apr 4 18:21:33.330: INFO: Number of nodes with available pods: 0 Apr 4 18:21:33.330: INFO: Number of running nodes: 0, number of available pods: 0 Apr 4 18:21:33.332: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5142/daemonsets","resourceVersion":"5405894"},"items":null} Apr 4 18:21:33.334: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5142/pods","resourceVersion":"5405894"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:21:33.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5142" for this suite. 
• [SLOW TEST:35.492 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":281,"completed":189,"skipped":3137,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:21:33.367: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-2324 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-2324 Apr 4 18:21:33.437: INFO: Found 0 stateful pods, waiting for 1 Apr 4 18:21:43.442: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a 
scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 18:21:43.462: INFO: Deleting all statefulset in ns statefulset-2324 Apr 4 18:21:43.468: INFO: Scaling statefulset ss to 0 Apr 4 18:22:13.525: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 18:22:13.529: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:22:13.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2324" for this suite. • [SLOW TEST:40.201 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":281,"completed":190,"skipped":3144,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:22:13.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 4 18:22:13.644: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-232 /api/v1/namespaces/watch-232/configmaps/e2e-watch-test-label-changed 4d0fbdd1-039a-44dd-b30f-d7a2bab9267b 5406105 0 2020-04-04 18:22:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:22:13.645: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-232 /api/v1/namespaces/watch-232/configmaps/e2e-watch-test-label-changed 4d0fbdd1-039a-44dd-b30f-d7a2bab9267b 5406106 0 2020-04-04 18:22:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:22:13.645: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-232 /api/v1/namespaces/watch-232/configmaps/e2e-watch-test-label-changed 4d0fbdd1-039a-44dd-b30f-d7a2bab9267b 5406107 0 2020-04-04 18:22:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting 
the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 4 18:22:23.703: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-232 /api/v1/namespaces/watch-232/configmaps/e2e-watch-test-label-changed 4d0fbdd1-039a-44dd-b30f-d7a2bab9267b 5406161 0 2020-04-04 18:22:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:22:23.703: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-232 /api/v1/namespaces/watch-232/configmaps/e2e-watch-test-label-changed 4d0fbdd1-039a-44dd-b30f-d7a2bab9267b 5406162 0 2020-04-04 18:22:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:22:23.703: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-232 /api/v1/namespaces/watch-232/configmaps/e2e-watch-test-label-changed 4d0fbdd1-039a-44dd-b30f-d7a2bab9267b 5406163 0 2020-04-04 18:22:13 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:22:23.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-232" for this suite. 
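
The event sequence this test asserts — MODIFIED events while the object's labels match the watch selector, a DELETED notification when the label is changed away, silence while it does not match, and a fresh ADDED when the label is restored — can be modeled outside the cluster. The sketch below is an illustrative model of those selector-filtered watch semantics only; `watch_events` and its inputs are hypothetical names, not part of client-go or the e2e framework.

```python
# Minimal model of label-selector watch semantics (illustration only,
# not the real Kubernetes watch implementation).

def watch_events(transitions, selector):
    """Given a sequence of (labels, change) object states, return the events
    a selector-filtered watch would deliver.

    A transition into the selector is reported as ADDED, a change while
    matching as MODIFIED, and a transition out of it (or deletion while
    matching) as DELETED -- the behavior the e2e test asserts above.
    """
    events = []
    was_matching = False
    for labels, change in transitions:
        matches = all(labels.get(k) == v for k, v in selector.items())
        if matches and not was_matching:
            events.append("ADDED")
        elif matches and change == "modify":
            events.append("MODIFIED")
        elif was_matching and not matches:
            events.append("DELETED")
        was_matching = matches
    return events


selector = {"watch-this-configmap": "label-changed-and-restored"}
history = [
    ({"watch-this-configmap": "label-changed-and-restored"}, "create"),  # ADDED
    ({"watch-this-configmap": "label-changed-and-restored"}, "modify"),  # MODIFIED
    ({"watch-this-configmap": "some-other-value"}, "modify"),            # DELETED
    ({"watch-this-configmap": "some-other-value"}, "modify"),            # no event
    ({"watch-this-configmap": "label-changed-and-restored"}, "modify"),  # ADDED
    ({"watch-this-configmap": "label-changed-and-restored"}, "modify"),  # MODIFIED
    ({}, "delete"),                                                      # DELETED
]
print(watch_events(history, selector))
# → ['ADDED', 'MODIFIED', 'DELETED', 'ADDED', 'MODIFIED', 'DELETED']
```

The two ADDED/MODIFIED/DELETED bursts match the two "Got :" groups logged above: one around the label change, one after the label is restored and the configmap is finally deleted.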
• [SLOW TEST:10.143 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":281,"completed":191,"skipped":3147,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:22:23.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 4 18:22:24.368: INFO: Pod name wrapped-volume-race-d6cb249f-3246-4449-8822-6ff04df2d210: Found 0 pods out of 5 Apr 4 18:22:29.374: INFO: Pod name wrapped-volume-race-d6cb249f-3246-4449-8822-6ff04df2d210: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-d6cb249f-3246-4449-8822-6ff04df2d210 in namespace emptydir-wrapper-2774, will wait for the garbage collector to delete the pods Apr 4 18:22:55.640: INFO: Deleting ReplicationController 
wrapped-volume-race-d6cb249f-3246-4449-8822-6ff04df2d210 took: 5.679993ms Apr 4 18:22:55.940: INFO: Terminating ReplicationController wrapped-volume-race-d6cb249f-3246-4449-8822-6ff04df2d210 pods took: 300.210179ms STEP: Creating RC which spawns configmap-volume pods Apr 4 18:23:23.877: INFO: Pod name wrapped-volume-race-4be5c401-3bde-4892-bbba-8e68b96ce648: Found 0 pods out of 5 Apr 4 18:23:29.384: INFO: Pod name wrapped-volume-race-4be5c401-3bde-4892-bbba-8e68b96ce648: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4be5c401-3bde-4892-bbba-8e68b96ce648 in namespace emptydir-wrapper-2774, will wait for the garbage collector to delete the pods Apr 4 18:24:03.185: INFO: Deleting ReplicationController wrapped-volume-race-4be5c401-3bde-4892-bbba-8e68b96ce648 took: 26.066037ms Apr 4 18:24:04.485: INFO: Terminating ReplicationController wrapped-volume-race-4be5c401-3bde-4892-bbba-8e68b96ce648 pods took: 1.300179142s STEP: Creating RC which spawns configmap-volume pods Apr 4 18:24:34.716: INFO: Pod name wrapped-volume-race-b07fc9da-4de4-4fab-9c05-ca1e30cf7e67: Found 0 pods out of 5 Apr 4 18:24:39.767: INFO: Pod name wrapped-volume-race-b07fc9da-4de4-4fab-9c05-ca1e30cf7e67: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b07fc9da-4de4-4fab-9c05-ca1e30cf7e67 in namespace emptydir-wrapper-2774, will wait for the garbage collector to delete the pods Apr 4 18:25:24.198: INFO: Deleting ReplicationController wrapped-volume-race-b07fc9da-4de4-4fab-9c05-ca1e30cf7e67 took: 33.223383ms Apr 4 18:25:24.498: INFO: Terminating ReplicationController wrapped-volume-race-b07fc9da-4de4-4fab-9c05-ca1e30cf7e67 pods took: 300.262268ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:25:33.697: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2774" for this suite. • [SLOW TEST:190.012 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":281,"completed":192,"skipped":3150,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:25:33.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 4 18:26:09.836: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:09.836: INFO: >>> kubeConfig: /root/.kube/config 
I0404 18:26:09.861080 7 log.go:172] (0xc002992160) (0xc0016a6be0) Create stream I0404 18:26:09.861107 7 log.go:172] (0xc002992160) (0xc0016a6be0) Stream added, broadcasting: 1 I0404 18:26:09.863030 7 log.go:172] (0xc002992160) Reply frame received for 1 I0404 18:26:09.863051 7 log.go:172] (0xc002992160) (0xc0016a6c80) Create stream I0404 18:26:09.863058 7 log.go:172] (0xc002992160) (0xc0016a6c80) Stream added, broadcasting: 3 I0404 18:26:09.863793 7 log.go:172] (0xc002992160) Reply frame received for 3 I0404 18:26:09.863825 7 log.go:172] (0xc002992160) (0xc0016a6dc0) Create stream I0404 18:26:09.863836 7 log.go:172] (0xc002992160) (0xc0016a6dc0) Stream added, broadcasting: 5 I0404 18:26:09.864613 7 log.go:172] (0xc002992160) Reply frame received for 5 I0404 18:26:09.909744 7 log.go:172] (0xc002992160) Data frame received for 3 I0404 18:26:09.909786 7 log.go:172] (0xc0016a6c80) (3) Data frame handling I0404 18:26:09.909812 7 log.go:172] (0xc0016a6c80) (3) Data frame sent I0404 18:26:09.909823 7 log.go:172] (0xc002992160) Data frame received for 3 I0404 18:26:09.909831 7 log.go:172] (0xc0016a6c80) (3) Data frame handling I0404 18:26:09.909874 7 log.go:172] (0xc002992160) Data frame received for 5 I0404 18:26:09.909888 7 log.go:172] (0xc0016a6dc0) (5) Data frame handling I0404 18:26:09.911047 7 log.go:172] (0xc002992160) Data frame received for 1 I0404 18:26:09.911064 7 log.go:172] (0xc0016a6be0) (1) Data frame handling I0404 18:26:09.911080 7 log.go:172] (0xc0016a6be0) (1) Data frame sent I0404 18:26:09.911226 7 log.go:172] (0xc002992160) (0xc0016a6be0) Stream removed, broadcasting: 1 I0404 18:26:09.911260 7 log.go:172] (0xc002992160) Go away received I0404 18:26:09.911366 7 log.go:172] (0xc002992160) (0xc0016a6be0) Stream removed, broadcasting: 1 I0404 18:26:09.911399 7 log.go:172] (0xc002992160) (0xc0016a6c80) Stream removed, broadcasting: 3 I0404 18:26:09.911417 7 log.go:172] (0xc002992160) (0xc0016a6dc0) Stream removed, broadcasting: 5 Apr 4 18:26:09.911: INFO: 
Exec stderr: "" Apr 4 18:26:09.911: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:09.911: INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:09.940677 7 log.go:172] (0xc0023ef6b0) (0xc000f1cf00) Create stream I0404 18:26:09.940706 7 log.go:172] (0xc0023ef6b0) (0xc000f1cf00) Stream added, broadcasting: 1 I0404 18:26:09.942250 7 log.go:172] (0xc0023ef6b0) Reply frame received for 1 I0404 18:26:09.942281 7 log.go:172] (0xc0023ef6b0) (0xc001211360) Create stream I0404 18:26:09.942298 7 log.go:172] (0xc0023ef6b0) (0xc001211360) Stream added, broadcasting: 3 I0404 18:26:09.942872 7 log.go:172] (0xc0023ef6b0) Reply frame received for 3 I0404 18:26:09.942922 7 log.go:172] (0xc0023ef6b0) (0xc001326140) Create stream I0404 18:26:09.942940 7 log.go:172] (0xc0023ef6b0) (0xc001326140) Stream added, broadcasting: 5 I0404 18:26:09.944048 7 log.go:172] (0xc0023ef6b0) Reply frame received for 5 I0404 18:26:09.993956 7 log.go:172] (0xc0023ef6b0) Data frame received for 5 I0404 18:26:09.993984 7 log.go:172] (0xc001326140) (5) Data frame handling I0404 18:26:09.994002 7 log.go:172] (0xc0023ef6b0) Data frame received for 3 I0404 18:26:09.994011 7 log.go:172] (0xc001211360) (3) Data frame handling I0404 18:26:09.994022 7 log.go:172] (0xc001211360) (3) Data frame sent I0404 18:26:09.994031 7 log.go:172] (0xc0023ef6b0) Data frame received for 3 I0404 18:26:09.994039 7 log.go:172] (0xc001211360) (3) Data frame handling I0404 18:26:09.995044 7 log.go:172] (0xc0023ef6b0) Data frame received for 1 I0404 18:26:09.995064 7 log.go:172] (0xc000f1cf00) (1) Data frame handling I0404 18:26:09.995072 7 log.go:172] (0xc000f1cf00) (1) Data frame sent I0404 18:26:09.995083 7 log.go:172] (0xc0023ef6b0) (0xc000f1cf00) Stream removed, broadcasting: 1 I0404 18:26:09.995138 7 log.go:172] (0xc0023ef6b0) (0xc000f1cf00) Stream removed, 
broadcasting: 1 I0404 18:26:09.995147 7 log.go:172] (0xc0023ef6b0) (0xc001211360) Stream removed, broadcasting: 3 I0404 18:26:09.995156 7 log.go:172] (0xc0023ef6b0) (0xc001326140) Stream removed, broadcasting: 5 Apr 4 18:26:09.995: INFO: Exec stderr: "" Apr 4 18:26:09.995: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:09.995: INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:09.995235 7 log.go:172] (0xc0023ef6b0) Go away received I0404 18:26:10.023954 7 log.go:172] (0xc0023efd90) (0xc000f1d360) Create stream I0404 18:26:10.023978 7 log.go:172] (0xc0023efd90) (0xc000f1d360) Stream added, broadcasting: 1 I0404 18:26:10.025553 7 log.go:172] (0xc0023efd90) Reply frame received for 1 I0404 18:26:10.025601 7 log.go:172] (0xc0023efd90) (0xc000f1d7c0) Create stream I0404 18:26:10.025623 7 log.go:172] (0xc0023efd90) (0xc000f1d7c0) Stream added, broadcasting: 3 I0404 18:26:10.026378 7 log.go:172] (0xc0023efd90) Reply frame received for 3 I0404 18:26:10.026398 7 log.go:172] (0xc0023efd90) (0xc000f1dea0) Create stream I0404 18:26:10.026408 7 log.go:172] (0xc0023efd90) (0xc000f1dea0) Stream added, broadcasting: 5 I0404 18:26:10.027109 7 log.go:172] (0xc0023efd90) Reply frame received for 5 I0404 18:26:10.088048 7 log.go:172] (0xc0023efd90) Data frame received for 5 I0404 18:26:10.088066 7 log.go:172] (0xc000f1dea0) (5) Data frame handling I0404 18:26:10.088110 7 log.go:172] (0xc0023efd90) Data frame received for 3 I0404 18:26:10.088143 7 log.go:172] (0xc000f1d7c0) (3) Data frame handling I0404 18:26:10.088171 7 log.go:172] (0xc000f1d7c0) (3) Data frame sent I0404 18:26:10.088189 7 log.go:172] (0xc0023efd90) Data frame received for 3 I0404 18:26:10.088202 7 log.go:172] (0xc000f1d7c0) (3) Data frame handling I0404 18:26:10.089344 7 log.go:172] (0xc0023efd90) Data frame received for 1 I0404 18:26:10.089357 7 
log.go:172] (0xc000f1d360) (1) Data frame handling I0404 18:26:10.089365 7 log.go:172] (0xc000f1d360) (1) Data frame sent I0404 18:26:10.089448 7 log.go:172] (0xc0023efd90) (0xc000f1d360) Stream removed, broadcasting: 1 I0404 18:26:10.089488 7 log.go:172] (0xc0023efd90) (0xc000f1d360) Stream removed, broadcasting: 1 I0404 18:26:10.089494 7 log.go:172] (0xc0023efd90) (0xc000f1d7c0) Stream removed, broadcasting: 3 I0404 18:26:10.089597 7 log.go:172] (0xc0023efd90) (0xc000f1dea0) Stream removed, broadcasting: 5 I0404 18:26:10.089668 7 log.go:172] (0xc0023efd90) Go away received Apr 4 18:26:10.089: INFO: Exec stderr: "" Apr 4 18:26:10.089: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:10.089: INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:10.119610 7 log.go:172] (0xc00288e420) (0xc001326960) Create stream I0404 18:26:10.119649 7 log.go:172] (0xc00288e420) (0xc001326960) Stream added, broadcasting: 1 I0404 18:26:10.121964 7 log.go:172] (0xc00288e420) Reply frame received for 1 I0404 18:26:10.121996 7 log.go:172] (0xc00288e420) (0xc000d02280) Create stream I0404 18:26:10.122008 7 log.go:172] (0xc00288e420) (0xc000d02280) Stream added, broadcasting: 3 I0404 18:26:10.122825 7 log.go:172] (0xc00288e420) Reply frame received for 3 I0404 18:26:10.122879 7 log.go:172] (0xc00288e420) (0xc001211400) Create stream I0404 18:26:10.122917 7 log.go:172] (0xc00288e420) (0xc001211400) Stream added, broadcasting: 5 I0404 18:26:10.123670 7 log.go:172] (0xc00288e420) Reply frame received for 5 I0404 18:26:10.186267 7 log.go:172] (0xc00288e420) Data frame received for 3 I0404 18:26:10.186297 7 log.go:172] (0xc000d02280) (3) Data frame handling I0404 18:26:10.186309 7 log.go:172] (0xc000d02280) (3) Data frame sent I0404 18:26:10.186315 7 log.go:172] (0xc00288e420) Data frame received for 3 I0404 18:26:10.186327 7 
log.go:172] (0xc000d02280) (3) Data frame handling I0404 18:26:10.186343 7 log.go:172] (0xc00288e420) Data frame received for 5 I0404 18:26:10.186349 7 log.go:172] (0xc001211400) (5) Data frame handling I0404 18:26:10.187497 7 log.go:172] (0xc00288e420) Data frame received for 1 I0404 18:26:10.187512 7 log.go:172] (0xc001326960) (1) Data frame handling I0404 18:26:10.187527 7 log.go:172] (0xc001326960) (1) Data frame sent I0404 18:26:10.187594 7 log.go:172] (0xc00288e420) (0xc001326960) Stream removed, broadcasting: 1 I0404 18:26:10.187674 7 log.go:172] (0xc00288e420) (0xc001326960) Stream removed, broadcasting: 1 I0404 18:26:10.187693 7 log.go:172] (0xc00288e420) (0xc000d02280) Stream removed, broadcasting: 3 I0404 18:26:10.187718 7 log.go:172] (0xc00288e420) Go away received I0404 18:26:10.187839 7 log.go:172] (0xc00288e420) (0xc001211400) Stream removed, broadcasting: 5 Apr 4 18:26:10.187: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 4 18:26:10.187: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:10.187: INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:10.216510 7 log.go:172] (0xc00288ebb0) (0xc001326dc0) Create stream I0404 18:26:10.216527 7 log.go:172] (0xc00288ebb0) (0xc001326dc0) Stream added, broadcasting: 1 I0404 18:26:10.218701 7 log.go:172] (0xc00288ebb0) Reply frame received for 1 I0404 18:26:10.218741 7 log.go:172] (0xc00288ebb0) (0xc000d023c0) Create stream I0404 18:26:10.218756 7 log.go:172] (0xc00288ebb0) (0xc000d023c0) Stream added, broadcasting: 3 I0404 18:26:10.219737 7 log.go:172] (0xc00288ebb0) Reply frame received for 3 I0404 18:26:10.219778 7 log.go:172] (0xc00288ebb0) (0xc0012114a0) Create stream I0404 18:26:10.219793 7 log.go:172] (0xc00288ebb0) (0xc0012114a0) Stream added, broadcasting: 5 
I0404 18:26:10.220502 7 log.go:172] (0xc00288ebb0) Reply frame received for 5 I0404 18:26:10.292058 7 log.go:172] (0xc00288ebb0) Data frame received for 3 I0404 18:26:10.292124 7 log.go:172] (0xc000d023c0) (3) Data frame handling I0404 18:26:10.292138 7 log.go:172] (0xc000d023c0) (3) Data frame sent I0404 18:26:10.292146 7 log.go:172] (0xc00288ebb0) Data frame received for 3 I0404 18:26:10.292151 7 log.go:172] (0xc000d023c0) (3) Data frame handling I0404 18:26:10.292164 7 log.go:172] (0xc00288ebb0) Data frame received for 5 I0404 18:26:10.292171 7 log.go:172] (0xc0012114a0) (5) Data frame handling I0404 18:26:10.293479 7 log.go:172] (0xc00288ebb0) Data frame received for 1 I0404 18:26:10.293496 7 log.go:172] (0xc001326dc0) (1) Data frame handling I0404 18:26:10.293505 7 log.go:172] (0xc001326dc0) (1) Data frame sent I0404 18:26:10.293616 7 log.go:172] (0xc00288ebb0) (0xc001326dc0) Stream removed, broadcasting: 1 I0404 18:26:10.293697 7 log.go:172] (0xc00288ebb0) (0xc001326dc0) Stream removed, broadcasting: 1 I0404 18:26:10.293714 7 log.go:172] (0xc00288ebb0) (0xc000d023c0) Stream removed, broadcasting: 3 I0404 18:26:10.293856 7 log.go:172] (0xc00288ebb0) (0xc0012114a0) Stream removed, broadcasting: 5 Apr 4 18:26:10.293: INFO: Exec stderr: "" Apr 4 18:26:10.293: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:10.293: INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:10.293965 7 log.go:172] (0xc00288ebb0) Go away received I0404 18:26:10.319476 7 log.go:172] (0xc00288f290) (0xc001326fa0) Create stream I0404 18:26:10.319522 7 log.go:172] (0xc00288f290) (0xc001326fa0) Stream added, broadcasting: 1 I0404 18:26:10.322232 7 log.go:172] (0xc00288f290) Reply frame received for 1 I0404 18:26:10.322269 7 log.go:172] (0xc00288f290) (0xc0013270e0) Create stream I0404 18:26:10.322285 7 log.go:172] 
(0xc00288f290) (0xc0013270e0) Stream added, broadcasting: 3 I0404 18:26:10.323859 7 log.go:172] (0xc00288f290) Reply frame received for 3 I0404 18:26:10.323909 7 log.go:172] (0xc00288f290) (0xc000538320) Create stream I0404 18:26:10.323928 7 log.go:172] (0xc00288f290) (0xc000538320) Stream added, broadcasting: 5 I0404 18:26:10.325817 7 log.go:172] (0xc00288f290) Reply frame received for 5 I0404 18:26:10.374160 7 log.go:172] (0xc00288f290) Data frame received for 5 I0404 18:26:10.374204 7 log.go:172] (0xc000538320) (5) Data frame handling I0404 18:26:10.374230 7 log.go:172] (0xc00288f290) Data frame received for 3 I0404 18:26:10.374286 7 log.go:172] (0xc0013270e0) (3) Data frame handling I0404 18:26:10.374312 7 log.go:172] (0xc0013270e0) (3) Data frame sent I0404 18:26:10.374330 7 log.go:172] (0xc00288f290) Data frame received for 3 I0404 18:26:10.374354 7 log.go:172] (0xc0013270e0) (3) Data frame handling I0404 18:26:10.375184 7 log.go:172] (0xc00288f290) Data frame received for 1 I0404 18:26:10.375210 7 log.go:172] (0xc001326fa0) (1) Data frame handling I0404 18:26:10.375225 7 log.go:172] (0xc001326fa0) (1) Data frame sent I0404 18:26:10.375344 7 log.go:172] (0xc00288f290) (0xc001326fa0) Stream removed, broadcasting: 1 I0404 18:26:10.375369 7 log.go:172] (0xc00288f290) Go away received I0404 18:26:10.375420 7 log.go:172] (0xc00288f290) (0xc001326fa0) Stream removed, broadcasting: 1 I0404 18:26:10.375439 7 log.go:172] (0xc00288f290) (0xc0013270e0) Stream removed, broadcasting: 3 I0404 18:26:10.375452 7 log.go:172] (0xc00288f290) (0xc000538320) Stream removed, broadcasting: 5 Apr 4 18:26:10.375: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 4 18:26:10.375: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:10.375: 
INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:10.408316 7 log.go:172] (0xc002d2c580) (0xc000d02780) Create stream I0404 18:26:10.408344 7 log.go:172] (0xc002d2c580) (0xc000d02780) Stream added, broadcasting: 1 I0404 18:26:10.410250 7 log.go:172] (0xc002d2c580) Reply frame received for 1 I0404 18:26:10.410291 7 log.go:172] (0xc002d2c580) (0xc001211680) Create stream I0404 18:26:10.410311 7 log.go:172] (0xc002d2c580) (0xc001211680) Stream added, broadcasting: 3 I0404 18:26:10.411000 7 log.go:172] (0xc002d2c580) Reply frame received for 3 I0404 18:26:10.411030 7 log.go:172] (0xc002d2c580) (0xc0016a6fa0) Create stream I0404 18:26:10.411042 7 log.go:172] (0xc002d2c580) (0xc0016a6fa0) Stream added, broadcasting: 5 I0404 18:26:10.411633 7 log.go:172] (0xc002d2c580) Reply frame received for 5 I0404 18:26:10.469222 7 log.go:172] (0xc002d2c580) Data frame received for 5 I0404 18:26:10.469274 7 log.go:172] (0xc002d2c580) Data frame received for 3 I0404 18:26:10.469325 7 log.go:172] (0xc001211680) (3) Data frame handling I0404 18:26:10.469345 7 log.go:172] (0xc001211680) (3) Data frame sent I0404 18:26:10.469362 7 log.go:172] (0xc002d2c580) Data frame received for 3 I0404 18:26:10.469379 7 log.go:172] (0xc001211680) (3) Data frame handling I0404 18:26:10.469427 7 log.go:172] (0xc0016a6fa0) (5) Data frame handling I0404 18:26:10.470648 7 log.go:172] (0xc002d2c580) Data frame received for 1 I0404 18:26:10.470672 7 log.go:172] (0xc000d02780) (1) Data frame handling I0404 18:26:10.470696 7 log.go:172] (0xc000d02780) (1) Data frame sent I0404 18:26:10.470715 7 log.go:172] (0xc002d2c580) (0xc000d02780) Stream removed, broadcasting: 1 I0404 18:26:10.470734 7 log.go:172] (0xc002d2c580) Go away received I0404 18:26:10.470857 7 log.go:172] (0xc002d2c580) (0xc000d02780) Stream removed, broadcasting: 1 I0404 18:26:10.470890 7 log.go:172] (0xc002d2c580) (0xc001211680) Stream removed, broadcasting: 3 I0404 18:26:10.470914 7 log.go:172] (0xc002d2c580) (0xc0016a6fa0) Stream removed, 
broadcasting: 5 Apr 4 18:26:10.470: INFO: Exec stderr: "" Apr 4 18:26:10.470: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:10.470: INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:10.501433 7 log.go:172] (0xc00288f8c0) (0xc001327860) Create stream I0404 18:26:10.501462 7 log.go:172] (0xc00288f8c0) (0xc001327860) Stream added, broadcasting: 1 I0404 18:26:10.503853 7 log.go:172] (0xc00288f8c0) Reply frame received for 1 I0404 18:26:10.503907 7 log.go:172] (0xc00288f8c0) (0xc0013279a0) Create stream I0404 18:26:10.503936 7 log.go:172] (0xc00288f8c0) (0xc0013279a0) Stream added, broadcasting: 3 I0404 18:26:10.504837 7 log.go:172] (0xc00288f8c0) Reply frame received for 3 I0404 18:26:10.504865 7 log.go:172] (0xc00288f8c0) (0xc0012117c0) Create stream I0404 18:26:10.504875 7 log.go:172] (0xc00288f8c0) (0xc0012117c0) Stream added, broadcasting: 5 I0404 18:26:10.505724 7 log.go:172] (0xc00288f8c0) Reply frame received for 5 I0404 18:26:10.572186 7 log.go:172] (0xc00288f8c0) Data frame received for 5 I0404 18:26:10.572225 7 log.go:172] (0xc0012117c0) (5) Data frame handling I0404 18:26:10.572256 7 log.go:172] (0xc00288f8c0) Data frame received for 3 I0404 18:26:10.572269 7 log.go:172] (0xc0013279a0) (3) Data frame handling I0404 18:26:10.572282 7 log.go:172] (0xc0013279a0) (3) Data frame sent I0404 18:26:10.572294 7 log.go:172] (0xc00288f8c0) Data frame received for 3 I0404 18:26:10.572306 7 log.go:172] (0xc0013279a0) (3) Data frame handling I0404 18:26:10.572944 7 log.go:172] (0xc00288f8c0) Data frame received for 1 I0404 18:26:10.572964 7 log.go:172] (0xc001327860) (1) Data frame handling I0404 18:26:10.572973 7 log.go:172] (0xc001327860) (1) Data frame sent I0404 18:26:10.572987 7 log.go:172] (0xc00288f8c0) (0xc001327860) Stream removed, broadcasting: 1 I0404 18:26:10.573001 7 
log.go:172] (0xc00288f8c0) Go away received I0404 18:26:10.573246 7 log.go:172] (0xc00288f8c0) (0xc001327860) Stream removed, broadcasting: 1 I0404 18:26:10.573269 7 log.go:172] (0xc00288f8c0) (0xc0013279a0) Stream removed, broadcasting: 3 I0404 18:26:10.573284 7 log.go:172] (0xc00288f8c0) (0xc0012117c0) Stream removed, broadcasting: 5 Apr 4 18:26:10.573: INFO: Exec stderr: "" Apr 4 18:26:10.573: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:10.573: INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:10.598570 7 log.go:172] (0xc0022ee370) (0xc0005399a0) Create stream I0404 18:26:10.598589 7 log.go:172] (0xc0022ee370) (0xc0005399a0) Stream added, broadcasting: 1 I0404 18:26:10.600700 7 log.go:172] (0xc0022ee370) Reply frame received for 1 I0404 18:26:10.600736 7 log.go:172] (0xc0022ee370) (0xc000539c20) Create stream I0404 18:26:10.600746 7 log.go:172] (0xc0022ee370) (0xc000539c20) Stream added, broadcasting: 3 I0404 18:26:10.601793 7 log.go:172] (0xc0022ee370) Reply frame received for 3 I0404 18:26:10.601821 7 log.go:172] (0xc0022ee370) (0xc000539cc0) Create stream I0404 18:26:10.601832 7 log.go:172] (0xc0022ee370) (0xc000539cc0) Stream added, broadcasting: 5 I0404 18:26:10.602731 7 log.go:172] (0xc0022ee370) Reply frame received for 5 I0404 18:26:10.648261 7 log.go:172] (0xc0022ee370) Data frame received for 5 I0404 18:26:10.648313 7 log.go:172] (0xc000539cc0) (5) Data frame handling I0404 18:26:10.648344 7 log.go:172] (0xc0022ee370) Data frame received for 3 I0404 18:26:10.648370 7 log.go:172] (0xc000539c20) (3) Data frame handling I0404 18:26:10.648395 7 log.go:172] (0xc000539c20) (3) Data frame sent I0404 18:26:10.648415 7 log.go:172] (0xc0022ee370) Data frame received for 3 I0404 18:26:10.648437 7 log.go:172] (0xc000539c20) (3) Data frame handling I0404 18:26:10.650016 7 log.go:172] 
(0xc0022ee370) Data frame received for 1 I0404 18:26:10.650050 7 log.go:172] (0xc0005399a0) (1) Data frame handling I0404 18:26:10.650082 7 log.go:172] (0xc0005399a0) (1) Data frame sent I0404 18:26:10.650111 7 log.go:172] (0xc0022ee370) (0xc0005399a0) Stream removed, broadcasting: 1 I0404 18:26:10.650145 7 log.go:172] (0xc0022ee370) Go away received I0404 18:26:10.650202 7 log.go:172] (0xc0022ee370) (0xc0005399a0) Stream removed, broadcasting: 1 I0404 18:26:10.650215 7 log.go:172] (0xc0022ee370) (0xc000539c20) Stream removed, broadcasting: 3 I0404 18:26:10.650225 7 log.go:172] (0xc0022ee370) (0xc000539cc0) Stream removed, broadcasting: 5 Apr 4 18:26:10.650: INFO: Exec stderr: "" Apr 4 18:26:10.650: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6000 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 4 18:26:10.650: INFO: >>> kubeConfig: /root/.kube/config I0404 18:26:10.682513 7 log.go:172] (0xc002670580) (0xc0016a75e0) Create stream I0404 18:26:10.682539 7 log.go:172] (0xc002670580) (0xc0016a75e0) Stream added, broadcasting: 1 I0404 18:26:10.685006 7 log.go:172] (0xc002670580) Reply frame received for 1 I0404 18:26:10.685037 7 log.go:172] (0xc002670580) (0xc0012119a0) Create stream I0404 18:26:10.685056 7 log.go:172] (0xc002670580) (0xc0012119a0) Stream added, broadcasting: 3 I0404 18:26:10.686019 7 log.go:172] (0xc002670580) Reply frame received for 3 I0404 18:26:10.686081 7 log.go:172] (0xc002670580) (0xc000d028c0) Create stream I0404 18:26:10.686105 7 log.go:172] (0xc002670580) (0xc000d028c0) Stream added, broadcasting: 5 I0404 18:26:10.687296 7 log.go:172] (0xc002670580) Reply frame received for 5 I0404 18:26:10.739650 7 log.go:172] (0xc002670580) Data frame received for 3 I0404 18:26:10.739694 7 log.go:172] (0xc0012119a0) (3) Data frame handling I0404 18:26:10.739712 7 log.go:172] (0xc0012119a0) (3) Data frame sent I0404 18:26:10.739796 7 
log.go:172] (0xc002670580) Data frame received for 5 I0404 18:26:10.739817 7 log.go:172] (0xc000d028c0) (5) Data frame handling I0404 18:26:10.739965 7 log.go:172] (0xc002670580) Data frame received for 3 I0404 18:26:10.739984 7 log.go:172] (0xc0012119a0) (3) Data frame handling I0404 18:26:10.741325 7 log.go:172] (0xc002670580) Data frame received for 1 I0404 18:26:10.741344 7 log.go:172] (0xc0016a75e0) (1) Data frame handling I0404 18:26:10.741368 7 log.go:172] (0xc0016a75e0) (1) Data frame sent I0404 18:26:10.741784 7 log.go:172] (0xc002670580) (0xc0016a75e0) Stream removed, broadcasting: 1 I0404 18:26:10.741810 7 log.go:172] (0xc002670580) Go away received I0404 18:26:10.741851 7 log.go:172] (0xc002670580) (0xc0016a75e0) Stream removed, broadcasting: 1 I0404 18:26:10.741866 7 log.go:172] (0xc002670580) (0xc0012119a0) Stream removed, broadcasting: 3 I0404 18:26:10.741874 7 log.go:172] (0xc002670580) (0xc000d028c0) Stream removed, broadcasting: 5 Apr 4 18:26:10.741: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:26:10.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-6000" for this suite. 
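
The rule this test verifies by comparing /etc/hosts with /etc/hosts-original in each container can be summed up in one predicate: the kubelet injects its managed /etc/hosts into a container unless the pod runs with hostNetwork=true or the container explicitly mounts something at /etc/hosts. The sketch below is a simplified model of that decision; `kubelet_manages_etc_hosts` is a hypothetical helper, not actual kubelet code.

```python
# Simplified model of the kubelet's /etc/hosts management decision
# (illustration only, not the actual kubelet implementation).

def kubelet_manages_etc_hosts(host_network, container_mount_paths):
    """Return True if the kubelet would inject its managed /etc/hosts
    into the container.

    hostNetwork pods see the node's own /etc/hosts, and a container that
    mounts its own volume at /etc/hosts opts out as well.
    """
    if host_network:
        return False
    return "/etc/hosts" not in container_mount_paths


# The cases exercised above (container names from the log):
print(kubelet_manages_etc_hosts(False, []))              # busybox-1/2 in test-pod
print(kubelet_manages_etc_hosts(False, ["/etc/hosts"]))  # busybox-3 with its own mount
print(kubelet_manages_etc_hosts(True, []))               # test-host-network-pod
```

This mirrors the three verification steps in the log: kubelet-managed for the plain containers, not managed for the container with its own /etc/hosts mount, and not managed for the hostNetwork=true pod.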
• [SLOW TEST:37.022 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":193,"skipped":3165,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:26:10.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-9wxf STEP: Creating a pod to test atomic-volume-subpath Apr 4 18:26:10.842: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9wxf" in namespace "subpath-5017" to be "Succeeded or Failed" Apr 4 18:26:10.852: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.537379ms Apr 4 18:26:12.856: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014395907s Apr 4 18:26:14.859: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 4.01731005s Apr 4 18:26:16.861: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 6.019664673s Apr 4 18:26:18.866: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 8.023765533s Apr 4 18:26:20.870: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 10.027709333s Apr 4 18:26:22.873: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 12.031228951s Apr 4 18:26:24.876: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 14.034310727s Apr 4 18:26:26.880: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 16.038427837s Apr 4 18:26:28.884: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 18.042445971s Apr 4 18:26:30.887: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 20.045402303s Apr 4 18:26:32.891: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 22.04921584s Apr 4 18:26:34.946: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 24.103972725s Apr 4 18:26:37.032: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Running", Reason="", readiness=true. Elapsed: 26.190530549s Apr 4 18:26:39.654: INFO: Pod "pod-subpath-test-projected-9wxf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.81169791s STEP: Saw pod success Apr 4 18:26:39.654: INFO: Pod "pod-subpath-test-projected-9wxf" satisfied condition "Succeeded or Failed" Apr 4 18:26:39.657: INFO: Trying to get logs from node latest-worker2 pod pod-subpath-test-projected-9wxf container test-container-subpath-projected-9wxf: STEP: delete the pod Apr 4 18:26:39.837: INFO: Waiting for pod pod-subpath-test-projected-9wxf to disappear Apr 4 18:26:39.839: INFO: Pod pod-subpath-test-projected-9wxf no longer exists STEP: Deleting pod pod-subpath-test-projected-9wxf Apr 4 18:26:39.839: INFO: Deleting pod "pod-subpath-test-projected-9wxf" in namespace "subpath-5017" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:26:39.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5017" for this suite. • [SLOW TEST:29.100 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":281,"completed":194,"skipped":3222,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 
18:26:39.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Apr 4 18:26:39.902: INFO: Waiting up to 5m0s for pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7" in namespace "var-expansion-3358" to be "Succeeded or Failed" Apr 4 18:26:39.916: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.083101ms Apr 4 18:26:41.927: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025851549s Apr 4 18:26:44.066: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16424865s Apr 4 18:26:46.288: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.385913978s Apr 4 18:26:49.003: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.101535823s Apr 4 18:26:51.227: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.325391485s Apr 4 18:26:53.928: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.026860377s Apr 4 18:26:55.945: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Running", Reason="", readiness=true. Elapsed: 16.04313115s Apr 4 18:26:58.011: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.10899023s STEP: Saw pod success Apr 4 18:26:58.011: INFO: Pod "var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7" satisfied condition "Succeeded or Failed" Apr 4 18:26:58.028: INFO: Trying to get logs from node latest-worker2 pod var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7 container dapi-container: STEP: delete the pod Apr 4 18:26:58.047: INFO: Waiting for pod var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7 to disappear Apr 4 18:26:58.064: INFO: Pod var-expansion-14eeacab-00db-48b8-ad1a-0501251f7aa7 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:26:58.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3358" for this suite. • [SLOW TEST:18.227 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":281,"completed":195,"skipped":3227,"failed":0} SSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:26:58.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:26:58.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1139" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":281,"completed":196,"skipped":3231,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:26:58.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:26:58.607: INFO: (0) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.316515ms) Apr 4 18:26:58.609: INFO: (1) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.29849ms) Apr 4 18:26:58.611: INFO: (2) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.00693ms) Apr 4 18:26:58.613: INFO: (3) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 1.995644ms) Apr 4 18:26:58.616: INFO: (4) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 3.000997ms) Apr 4 18:26:58.618: INFO: (5) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.38882ms) Apr 4 18:26:58.640: INFO: (6) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 21.383534ms) Apr 4 18:26:58.643: INFO: (7) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.70078ms) Apr 4 18:26:58.645: INFO: (8) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.487434ms) Apr 4 18:26:58.648: INFO: (9) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.537076ms) Apr 4 18:26:58.650: INFO: (10) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.47104ms) Apr 4 18:26:58.653: INFO: (11) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.585671ms) Apr 4 18:26:58.656: INFO: (12) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.872348ms) Apr 4 18:26:58.658: INFO: (13) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.772037ms) Apr 4 18:26:58.661: INFO: (14) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.350497ms) Apr 4 18:26:58.663: INFO: (15) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.239204ms) Apr 4 18:26:58.666: INFO: (16) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.373892ms) Apr 4 18:26:58.675: INFO: (17) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 9.90334ms) Apr 4 18:26:58.678: INFO: (18) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.469595ms) Apr 4 18:26:58.680: INFO: (19) /api/v1/nodes/latest-worker2:10250/proxy/logs/:
containers/
pods/
(200; 2.338569ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:26:58.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3864" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":281,"completed":197,"skipped":3258,"failed":0} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:26:58.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Apr 4 18:26:58.779: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:27:04.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8137" for this suite. 
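The init-container test above exercises the rule that, with `restartPolicy: Never`, a failed init container fails the whole pod and the app containers are never started. A toy model of that decision, with made-up function and result shapes (this is not kubelet code):

```python
def resolve_pod(init_exit_codes, restart_policy="Never"):
    """Run init containers in order; on the first non-zero exit with
    restartPolicy=Never the pod is Failed and app containers never start."""
    for code in init_exit_codes:
        if code != 0:
            if restart_policy == "Never":
                return {"phase": "Failed", "app_containers_started": False}
            # With other restart policies the kubelet retries the init
            # container instead of failing the pod (simplified here).
            return {"phase": "Pending", "app_containers_started": False}
    return {"phase": "Running", "app_containers_started": True}

outcome = resolve_pod([1])  # one failing init container, RestartNever
```

This is why the test only has to create the pod and wait: the failing init container alone is enough to drive the pod to a terminal Failed phase.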
• [SLOW TEST:5.815 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":281,"completed":198,"skipped":3266,"failed":0} S ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:27:04.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Apr 4 18:27:09.102: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8312 pod-service-account-0c84d503-e200-4b0b-849b-0f3a6835dbda -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 4 18:27:11.952: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8312 pod-service-account-0c84d503-e200-4b0b-849b-0f3a6835dbda -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 4 18:27:12.135: INFO: Running 
'/usr/local/bin/kubectl exec --namespace=svcaccounts-8312 pod-service-account-0c84d503-e200-4b0b-849b-0f3a6835dbda -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:27:12.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8312" for this suite. • [SLOW TEST:7.816 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":281,"completed":199,"skipped":3267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:27:12.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:180 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Apr 4 18:27:18.406: INFO: Pod pod-hostip-c5ce7553-40a5-4e78-9315-4e7a02e95e08 has hostIP: 172.17.0.13 [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:27:18.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5398" for this suite. • [SLOW TEST:6.096 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":281,"completed":200,"skipped":3303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:27:18.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:27:18.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4686" for this suite. 
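The namespace test above patches a label onto the namespace and then reads it back. The label semantics follow JSON merge-patch (RFC 7386): nested objects merge, `null` deletes a key, and scalars replace. A self-contained sketch of that merge, using an illustrative label key/value rather than the test's actual payload:

```python
def merge_patch(target, patch):
    """Apply an RFC 7386 style JSON merge patch: dicts merge recursively,
    None deletes a key, anything else replaces the value."""
    if not isinstance(patch, dict):
        return patch
    result = dict(target) if isinstance(target, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)
        else:
            result[key] = merge_patch(result.get(key), value)
    return result

# Hypothetical label payload; the namespace name is from the log above.
ns = {"metadata": {"name": "namespaces-4686", "labels": {}}}
patched = merge_patch(ns, {"metadata": {"labels": {"testLabel": "testValue"}}})
```

The "get the Namespace and ensuring it has the label" step then simply asserts the label survived the round trip through the API server.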
STEP: Destroying namespace "nspatchtest-baf7163e-727c-453f-a966-e225df976814-2309" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":281,"completed":201,"skipped":3393,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:27:18.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0404 18:28:02.663876 7 metrics_grabber.go:94] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 4 18:28:02.663: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:28:02.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3116" for this suite. 
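The "orphan pods" behavior verified above corresponds to deleting the ReplicationController with `propagationPolicy: Orphan`: instead of cascading the delete, the garbage collector strips the owner reference from each dependent and leaves the pods running. A sketch of that bookkeeping, with simplified object shapes of my own:

```python
def orphan_dependents(owner_uid, pods):
    """Remove owner references pointing at owner_uid, leaving the pods
    alive (what the GC does for propagationPolicy=Orphan, simplified)."""
    for pod in pods:
        pod["ownerReferences"] = [
            ref for ref in pod.get("ownerReferences", [])
            if ref["uid"] != owner_uid
        ]
    return pods

# Illustrative pod list; names and UIDs are invented.
pods = [{"name": "rc-pod-1", "ownerReferences": [{"uid": "rc-123"}]}]
orphaned = orphan_dependents("rc-123", pods)
```

The 30-second wait in the test exists to catch the failure mode where the collector mistakenly treats the orphaned pods as garbage and deletes them anyway.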
• [SLOW TEST:44.126 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":281,"completed":202,"skipped":3409,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:28:02.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:28:06.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4607" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":281,"completed":203,"skipped":3435,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:28:06.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7933 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Apr 4 18:28:07.060: INFO: Found 0 stateful pods, waiting for 3 Apr 4 18:28:17.301: INFO: Found 2 stateful pods, waiting for 3 Apr 4 18:28:27.067: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:28:27.067: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:28:27.067: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 4 18:28:37.062: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently 
Running - Ready=true Apr 4 18:28:37.062: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:28:37.062: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:28:37.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7933 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 18:28:37.481: INFO: stderr: "I0404 18:28:37.173345 3457 log.go:172] (0xc000acf600) (0xc000ac6820) Create stream\nI0404 18:28:37.173384 3457 log.go:172] (0xc000acf600) (0xc000ac6820) Stream added, broadcasting: 1\nI0404 18:28:37.175361 3457 log.go:172] (0xc000acf600) Reply frame received for 1\nI0404 18:28:37.175389 3457 log.go:172] (0xc000acf600) (0xc000a5a000) Create stream\nI0404 18:28:37.175398 3457 log.go:172] (0xc000acf600) (0xc000a5a000) Stream added, broadcasting: 3\nI0404 18:28:37.176107 3457 log.go:172] (0xc000acf600) Reply frame received for 3\nI0404 18:28:37.176153 3457 log.go:172] (0xc000acf600) (0xc0009ac000) Create stream\nI0404 18:28:37.176209 3457 log.go:172] (0xc000acf600) (0xc0009ac000) Stream added, broadcasting: 5\nI0404 18:28:37.176991 3457 log.go:172] (0xc000acf600) Reply frame received for 5\nI0404 18:28:37.238027 3457 log.go:172] (0xc000acf600) Data frame received for 5\nI0404 18:28:37.238053 3457 log.go:172] (0xc0009ac000) (5) Data frame handling\nI0404 18:28:37.238072 3457 log.go:172] (0xc0009ac000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 18:28:37.474772 3457 log.go:172] (0xc000acf600) Data frame received for 5\nI0404 18:28:37.474795 3457 log.go:172] (0xc0009ac000) (5) Data frame handling\nI0404 18:28:37.474809 3457 log.go:172] (0xc000acf600) Data frame received for 3\nI0404 18:28:37.474814 3457 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0404 18:28:37.474821 3457 log.go:172] (0xc000a5a000) (3) Data frame 
sent\nI0404 18:28:37.474827 3457 log.go:172] (0xc000acf600) Data frame received for 3\nI0404 18:28:37.474832 3457 log.go:172] (0xc000a5a000) (3) Data frame handling\nI0404 18:28:37.476532 3457 log.go:172] (0xc000acf600) Data frame received for 1\nI0404 18:28:37.476552 3457 log.go:172] (0xc000ac6820) (1) Data frame handling\nI0404 18:28:37.476571 3457 log.go:172] (0xc000ac6820) (1) Data frame sent\nI0404 18:28:37.476580 3457 log.go:172] (0xc000acf600) (0xc000ac6820) Stream removed, broadcasting: 1\nI0404 18:28:37.476590 3457 log.go:172] (0xc000acf600) Go away received\nI0404 18:28:37.477021 3457 log.go:172] (0xc000acf600) (0xc000ac6820) Stream removed, broadcasting: 1\nI0404 18:28:37.477060 3457 log.go:172] (0xc000acf600) (0xc000a5a000) Stream removed, broadcasting: 3\nI0404 18:28:37.477094 3457 log.go:172] (0xc000acf600) (0xc0009ac000) Stream removed, broadcasting: 5\n" Apr 4 18:28:37.481: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 18:28:37.481: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 4 18:28:47.521: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 4 18:28:57.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7933 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:28:57.747: INFO: stderr: "I0404 18:28:57.677059 3474 log.go:172] (0xc00078ca50) (0xc000774320) Create stream\nI0404 18:28:57.677229 3474 log.go:172] (0xc00078ca50) (0xc000774320) Stream added, broadcasting: 1\nI0404 18:28:57.679503 3474 log.go:172] (0xc00078ca50) Reply frame received for 1\nI0404 18:28:57.679529 3474 log.go:172] 
(0xc00078ca50) (0xc0007743c0) Create stream\nI0404 18:28:57.679538 3474 log.go:172] (0xc00078ca50) (0xc0007743c0) Stream added, broadcasting: 3\nI0404 18:28:57.680571 3474 log.go:172] (0xc00078ca50) Reply frame received for 3\nI0404 18:28:57.680601 3474 log.go:172] (0xc00078ca50) (0xc000790000) Create stream\nI0404 18:28:57.680612 3474 log.go:172] (0xc00078ca50) (0xc000790000) Stream added, broadcasting: 5\nI0404 18:28:57.681553 3474 log.go:172] (0xc00078ca50) Reply frame received for 5\nI0404 18:28:57.740829 3474 log.go:172] (0xc00078ca50) Data frame received for 3\nI0404 18:28:57.740871 3474 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0404 18:28:57.740882 3474 log.go:172] (0xc0007743c0) (3) Data frame sent\nI0404 18:28:57.740889 3474 log.go:172] (0xc00078ca50) Data frame received for 3\nI0404 18:28:57.740895 3474 log.go:172] (0xc0007743c0) (3) Data frame handling\nI0404 18:28:57.740914 3474 log.go:172] (0xc00078ca50) Data frame received for 5\nI0404 18:28:57.740923 3474 log.go:172] (0xc000790000) (5) Data frame handling\nI0404 18:28:57.740934 3474 log.go:172] (0xc000790000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 18:28:57.741041 3474 log.go:172] (0xc00078ca50) Data frame received for 5\nI0404 18:28:57.741060 3474 log.go:172] (0xc000790000) (5) Data frame handling\nI0404 18:28:57.742759 3474 log.go:172] (0xc00078ca50) Data frame received for 1\nI0404 18:28:57.742774 3474 log.go:172] (0xc000774320) (1) Data frame handling\nI0404 18:28:57.742781 3474 log.go:172] (0xc000774320) (1) Data frame sent\nI0404 18:28:57.742792 3474 log.go:172] (0xc00078ca50) (0xc000774320) Stream removed, broadcasting: 1\nI0404 18:28:57.742831 3474 log.go:172] (0xc00078ca50) Go away received\nI0404 18:28:57.743018 3474 log.go:172] (0xc00078ca50) (0xc000774320) Stream removed, broadcasting: 1\nI0404 18:28:57.743030 3474 log.go:172] (0xc00078ca50) (0xc0007743c0) Stream removed, broadcasting: 3\nI0404 18:28:57.743037 3474 log.go:172] 
(0xc00078ca50) (0xc000790000) Stream removed, broadcasting: 5\n" Apr 4 18:28:57.747: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 18:28:57.747: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 18:29:27.766: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Apr 4 18:29:27.766: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 4 18:29:38.093: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update STEP: Rolling back to a previous revision Apr 4 18:29:48.141: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7933 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 18:29:48.983: INFO: stderr: "I0404 18:29:48.434659 3494 log.go:172] (0xc00076e160) (0xc0006baf00) Create stream\nI0404 18:29:48.434746 3494 log.go:172] (0xc00076e160) (0xc0006baf00) Stream added, broadcasting: 1\nI0404 18:29:48.437587 3494 log.go:172] (0xc00076e160) Reply frame received for 1\nI0404 18:29:48.437631 3494 log.go:172] (0xc00076e160) (0xc0004e8960) Create stream\nI0404 18:29:48.437646 3494 log.go:172] (0xc00076e160) (0xc0004e8960) Stream added, broadcasting: 3\nI0404 18:29:48.438475 3494 log.go:172] (0xc00076e160) Reply frame received for 3\nI0404 18:29:48.438495 3494 log.go:172] (0xc00076e160) (0xc0005701e0) Create stream\nI0404 18:29:48.438502 3494 log.go:172] (0xc00076e160) (0xc0005701e0) Stream added, broadcasting: 5\nI0404 18:29:48.439442 3494 log.go:172] (0xc00076e160) Reply frame received for 5\nI0404 18:29:48.532672 3494 log.go:172] (0xc00076e160) Data frame received for 5\nI0404 18:29:48.532697 3494 log.go:172] (0xc0005701e0) (5) Data frame handling\nI0404 18:29:48.532712 3494 log.go:172] (0xc0005701e0) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0404 18:29:48.977756 3494 log.go:172] (0xc00076e160) Data frame received for 3\nI0404 18:29:48.977798 3494 log.go:172] (0xc0004e8960) (3) Data frame handling\nI0404 18:29:48.977821 3494 log.go:172] (0xc0004e8960) (3) Data frame sent\nI0404 18:29:48.977839 3494 log.go:172] (0xc00076e160) Data frame received for 3\nI0404 18:29:48.977855 3494 log.go:172] (0xc0004e8960) (3) Data frame handling\nI0404 18:29:48.977973 3494 log.go:172] (0xc00076e160) Data frame received for 5\nI0404 18:29:48.977992 3494 log.go:172] (0xc0005701e0) (5) Data frame handling\nI0404 18:29:48.979369 3494 log.go:172] (0xc00076e160) Data frame received for 1\nI0404 18:29:48.979379 3494 log.go:172] (0xc0006baf00) (1) Data frame handling\nI0404 18:29:48.979384 3494 log.go:172] (0xc0006baf00) (1) Data frame sent\nI0404 18:29:48.979555 3494 log.go:172] (0xc00076e160) (0xc0006baf00) Stream removed, broadcasting: 1\nI0404 18:29:48.979662 3494 log.go:172] (0xc00076e160) Go away received\nI0404 18:29:48.979761 3494 log.go:172] (0xc00076e160) (0xc0006baf00) Stream removed, broadcasting: 1\nI0404 18:29:48.979773 3494 log.go:172] (0xc00076e160) (0xc0004e8960) Stream removed, broadcasting: 3\nI0404 18:29:48.979780 3494 log.go:172] (0xc00076e160) (0xc0005701e0) Stream removed, broadcasting: 5\n" Apr 4 18:29:48.983: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 18:29:48.983: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 18:29:59.009: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 4 18:30:09.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-7933 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:30:09.252: INFO: stderr: "I0404 18:30:09.173974 3514 log.go:172] 
(0xc000ae2580) (0xc0005e5360) Create stream\nI0404 18:30:09.174024 3514 log.go:172] (0xc000ae2580) (0xc0005e5360) Stream added, broadcasting: 1\nI0404 18:30:09.176028 3514 log.go:172] (0xc000ae2580) Reply frame received for 1\nI0404 18:30:09.176088 3514 log.go:172] (0xc000ae2580) (0xc000412a00) Create stream\nI0404 18:30:09.176114 3514 log.go:172] (0xc000ae2580) (0xc000412a00) Stream added, broadcasting: 3\nI0404 18:30:09.176966 3514 log.go:172] (0xc000ae2580) Reply frame received for 3\nI0404 18:30:09.176982 3514 log.go:172] (0xc000ae2580) (0xc000412aa0) Create stream\nI0404 18:30:09.176988 3514 log.go:172] (0xc000ae2580) (0xc000412aa0) Stream added, broadcasting: 5\nI0404 18:30:09.177801 3514 log.go:172] (0xc000ae2580) Reply frame received for 5\nI0404 18:30:09.245752 3514 log.go:172] (0xc000ae2580) Data frame received for 5\nI0404 18:30:09.245800 3514 log.go:172] (0xc000412aa0) (5) Data frame handling\nI0404 18:30:09.245812 3514 log.go:172] (0xc000412aa0) (5) Data frame sent\nI0404 18:30:09.245822 3514 log.go:172] (0xc000ae2580) Data frame received for 5\nI0404 18:30:09.245832 3514 log.go:172] (0xc000412aa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 18:30:09.245869 3514 log.go:172] (0xc000ae2580) Data frame received for 3\nI0404 18:30:09.245891 3514 log.go:172] (0xc000412a00) (3) Data frame handling\nI0404 18:30:09.245910 3514 log.go:172] (0xc000412a00) (3) Data frame sent\nI0404 18:30:09.245920 3514 log.go:172] (0xc000ae2580) Data frame received for 3\nI0404 18:30:09.245926 3514 log.go:172] (0xc000412a00) (3) Data frame handling\nI0404 18:30:09.248580 3514 log.go:172] (0xc000ae2580) Data frame received for 1\nI0404 18:30:09.248612 3514 log.go:172] (0xc0005e5360) (1) Data frame handling\nI0404 18:30:09.248628 3514 log.go:172] (0xc0005e5360) (1) Data frame sent\nI0404 18:30:09.248641 3514 log.go:172] (0xc000ae2580) (0xc0005e5360) Stream removed, broadcasting: 1\nI0404 18:30:09.248649 3514 log.go:172] (0xc000ae2580) Go away 
received\nI0404 18:30:09.248984 3514 log.go:172] (0xc000ae2580) (0xc0005e5360) Stream removed, broadcasting: 1\nI0404 18:30:09.249004 3514 log.go:172] (0xc000ae2580) (0xc000412a00) Stream removed, broadcasting: 3\nI0404 18:30:09.249013 3514 log.go:172] (0xc000ae2580) (0xc000412aa0) Stream removed, broadcasting: 5\n" Apr 4 18:30:09.252: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 18:30:09.252: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 18:30:19.267: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Apr 4 18:30:19.267: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:19.267: INFO: Waiting for Pod statefulset-7933/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:19.267: INFO: Waiting for Pod statefulset-7933/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:29.279: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Apr 4 18:30:29.279: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:29.279: INFO: Waiting for Pod statefulset-7933/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:39.275: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Apr 4 18:30:39.275: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:39.275: INFO: Waiting for Pod statefulset-7933/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:49.274: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Apr 4 18:30:49.274: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:49.274: INFO: Waiting 
for Pod statefulset-7933/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:30:59.316: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Apr 4 18:30:59.316: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:31:09.274: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Apr 4 18:31:09.274: INFO: Waiting for Pod statefulset-7933/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Apr 4 18:31:19.273: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update Apr 4 18:31:29.275: INFO: Waiting for StatefulSet statefulset-7933/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 18:31:39.273: INFO: Deleting all statefulset in ns statefulset-7933 Apr 4 18:31:39.276: INFO: Scaling statefulset ss2 to 0 Apr 4 18:32:29.293: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 18:32:29.295: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:32:29.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7933" for this suite. 
• [SLOW TEST:262.518 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":281,"completed":204,"skipped":3450,"failed":0} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:32:29.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Apr 4 18:32:29.425: INFO: Waiting up to 5m0s for pod "client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed" in namespace "containers-1725" to be "Succeeded or Failed" Apr 4 18:32:29.433: INFO: Pod "client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.757318ms Apr 4 18:32:31.438: INFO: Pod "client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012983801s Apr 4 18:32:33.441: INFO: Pod "client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016492603s Apr 4 18:32:35.580: INFO: Pod "client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.155648638s STEP: Saw pod success Apr 4 18:32:35.580: INFO: Pod "client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed" satisfied condition "Succeeded or Failed" Apr 4 18:32:35.782: INFO: Trying to get logs from node latest-worker pod client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed container test-container: STEP: delete the pod Apr 4 18:32:35.878: INFO: Waiting for pod client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed to disappear Apr 4 18:32:35.926: INFO: Pod client-containers-d6cfe786-04e7-49df-b6f9-d23c3b2662ed no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:32:35.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1725" for this suite. 
• [SLOW TEST:6.643 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":281,"completed":205,"skipped":3455,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:32:35.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-33dbf4ba-50b2-4762-b89c-15d449c988c9 STEP: Creating a pod to test consume configMaps Apr 4 18:32:36.644: INFO: Waiting up to 5m0s for pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432" in namespace "configmap-3172" to be "Succeeded or Failed" Apr 4 18:32:36.685: INFO: Pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.85718ms Apr 4 18:32:38.984: INFO: Pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340103087s Apr 4 18:32:40.986: INFO: Pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432": Phase="Pending", Reason="", readiness=false. Elapsed: 4.342374721s Apr 4 18:32:43.046: INFO: Pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432": Phase="Pending", Reason="", readiness=false. Elapsed: 6.402434472s Apr 4 18:32:45.050: INFO: Pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432": Phase="Pending", Reason="", readiness=false. Elapsed: 8.406053561s Apr 4 18:32:47.053: INFO: Pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432": Phase="Running", Reason="", readiness=true. Elapsed: 10.40931515s Apr 4 18:32:49.056: INFO: Pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.412456381s STEP: Saw pod success Apr 4 18:32:49.056: INFO: Pod "pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432" satisfied condition "Succeeded or Failed" Apr 4 18:32:49.059: INFO: Trying to get logs from node latest-worker2 pod pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432 container configmap-volume-test: STEP: delete the pod Apr 4 18:32:49.160: INFO: Waiting for pod pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432 to disappear Apr 4 18:32:49.204: INFO: Pod pod-configmaps-1879a91e-2c55-467a-bfba-b2c9e69cd432 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:32:49.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3172" for this suite. 
• [SLOW TEST:13.238 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":206,"skipped":3474,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:32:49.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:32:49.465: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9124 I0404 18:32:49.515069 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9124, replica count: 1 I0404 18:32:50.565400 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:51.565692 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:52.565943 7 
runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:53.566122 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:54.566316 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:55.566570 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:56.566795 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:57.567010 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:58.567230 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:32:59.567413 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:00.567641 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:01.567842 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:02.568081 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:03.568296 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:04.568507 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:05.568741 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:06.568947 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:07.569294 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0404 18:33:08.569511 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 4 18:33:08.696: INFO: Created: latency-svc-dp2c6 Apr 4 18:33:08.716: INFO: Got endpoints: latency-svc-dp2c6 [46.990463ms] Apr 4 18:33:08.738: INFO: Created: latency-svc-vbv24 Apr 4 18:33:08.764: INFO: Got endpoints: latency-svc-vbv24 [47.766086ms] Apr 4 18:33:08.785: INFO: Created: latency-svc-kq2w9 Apr 4 18:33:08.811: INFO: Got endpoints: latency-svc-kq2w9 [94.984275ms] Apr 4 18:33:08.833: INFO: Created: latency-svc-2w9rf Apr 4 18:33:08.848: INFO: Got endpoints: latency-svc-2w9rf [131.114571ms] Apr 4 18:33:08.902: INFO: Created: latency-svc-zdlbc Apr 4 18:33:08.930: INFO: Got endpoints: latency-svc-zdlbc [213.253689ms] Apr 4 18:33:08.931: INFO: Created: latency-svc-dc7v5 Apr 4 18:33:08.953: INFO: Got endpoints: latency-svc-dc7v5 [237.059378ms] Apr 4 18:33:08.972: INFO: Created: latency-svc-xrd42 Apr 4 18:33:08.990: INFO: Got endpoints: latency-svc-xrd42 [273.097814ms] Apr 4 18:33:09.040: INFO: Created: latency-svc-qzzz5 Apr 4 18:33:09.073: INFO: Created: latency-svc-hxwgr Apr 4 18:33:09.073: INFO: Got endpoints: latency-svc-qzzz5 [357.049715ms] Apr 4 
18:33:09.116: INFO: Got endpoints: latency-svc-hxwgr [399.263042ms] Apr 4 18:33:09.187: INFO: Created: latency-svc-2lnj6 Apr 4 18:33:09.253: INFO: Got endpoints: latency-svc-2lnj6 [536.031813ms] Apr 4 18:33:09.253: INFO: Created: latency-svc-cgsnn Apr 4 18:33:09.283: INFO: Got endpoints: latency-svc-cgsnn [566.242416ms] Apr 4 18:33:09.339: INFO: Created: latency-svc-64x5s Apr 4 18:33:09.343: INFO: Got endpoints: latency-svc-64x5s [626.845049ms] Apr 4 18:33:09.373: INFO: Created: latency-svc-vfjn8 Apr 4 18:33:09.415: INFO: Got endpoints: latency-svc-vfjn8 [698.746716ms] Apr 4 18:33:09.476: INFO: Created: latency-svc-mttv4 Apr 4 18:33:09.494: INFO: Got endpoints: latency-svc-mttv4 [777.803651ms] Apr 4 18:33:09.495: INFO: Created: latency-svc-rjhf7 Apr 4 18:33:09.506: INFO: Got endpoints: latency-svc-rjhf7 [789.707251ms] Apr 4 18:33:09.525: INFO: Created: latency-svc-6nnj5 Apr 4 18:33:09.536: INFO: Got endpoints: latency-svc-6nnj5 [819.576143ms] Apr 4 18:33:09.554: INFO: Created: latency-svc-25r5b Apr 4 18:33:09.566: INFO: Got endpoints: latency-svc-25r5b [802.008429ms] Apr 4 18:33:09.609: INFO: Created: latency-svc-gpdqj Apr 4 18:33:09.626: INFO: Got endpoints: latency-svc-gpdqj [814.924966ms] Apr 4 18:33:09.693: INFO: Created: latency-svc-v5stl Apr 4 18:33:09.765: INFO: Got endpoints: latency-svc-v5stl [917.449173ms] Apr 4 18:33:09.776: INFO: Created: latency-svc-wftl2 Apr 4 18:33:10.118: INFO: Got endpoints: latency-svc-wftl2 [1.187831323s] Apr 4 18:33:10.215: INFO: Created: latency-svc-m7s9t Apr 4 18:33:10.412: INFO: Got endpoints: latency-svc-m7s9t [1.458162705s] Apr 4 18:33:10.414: INFO: Created: latency-svc-s76gj Apr 4 18:33:10.433: INFO: Got endpoints: latency-svc-s76gj [1.442842196s] Apr 4 18:33:10.646: INFO: Created: latency-svc-hw86x Apr 4 18:33:10.673: INFO: Got endpoints: latency-svc-hw86x [1.599290231s] Apr 4 18:33:10.730: INFO: Created: latency-svc-tv297 Apr 4 18:33:10.800: INFO: Got endpoints: latency-svc-tv297 [1.684291808s] Apr 4 18:33:10.802: INFO: 
Created: latency-svc-842zk Apr 4 18:33:10.816: INFO: Got endpoints: latency-svc-842zk [1.56287037s] Apr 4 18:33:10.863: INFO: Created: latency-svc-25vsk Apr 4 18:33:10.870: INFO: Got endpoints: latency-svc-25vsk [1.587149705s] Apr 4 18:33:10.898: INFO: Created: latency-svc-htztk Apr 4 18:33:10.944: INFO: Got endpoints: latency-svc-htztk [1.600505789s] Apr 4 18:33:10.945: INFO: Created: latency-svc-z4twj Apr 4 18:33:10.950: INFO: Got endpoints: latency-svc-z4twj [1.534294837s] Apr 4 18:33:10.971: INFO: Created: latency-svc-q9flp Apr 4 18:33:10.979: INFO: Got endpoints: latency-svc-q9flp [1.485258889s] Apr 4 18:33:10.995: INFO: Created: latency-svc-nvhrs Apr 4 18:33:11.013: INFO: Got endpoints: latency-svc-nvhrs [1.507178993s] Apr 4 18:33:11.030: INFO: Created: latency-svc-rxxrp Apr 4 18:33:11.100: INFO: Got endpoints: latency-svc-rxxrp [1.563885566s] Apr 4 18:33:11.101: INFO: Created: latency-svc-r4nfv Apr 4 18:33:11.130: INFO: Got endpoints: latency-svc-r4nfv [1.563529891s] Apr 4 18:33:11.183: INFO: Created: latency-svc-k5mfb Apr 4 18:33:11.196: INFO: Got endpoints: latency-svc-k5mfb [1.569251308s] Apr 4 18:33:11.245: INFO: Created: latency-svc-66qzl Apr 4 18:33:11.264: INFO: Got endpoints: latency-svc-66qzl [1.499202887s] Apr 4 18:33:11.265: INFO: Created: latency-svc-8fd4l Apr 4 18:33:11.301: INFO: Got endpoints: latency-svc-8fd4l [1.182502858s] Apr 4 18:33:11.326: INFO: Created: latency-svc-nzbsp Apr 4 18:33:11.343: INFO: Got endpoints: latency-svc-nzbsp [931.506629ms] Apr 4 18:33:11.381: INFO: Created: latency-svc-swxhw Apr 4 18:33:11.385: INFO: Got endpoints: latency-svc-swxhw [952.402341ms] Apr 4 18:33:11.403: INFO: Created: latency-svc-lt9nk Apr 4 18:33:11.421: INFO: Got endpoints: latency-svc-lt9nk [748.518817ms] Apr 4 18:33:11.444: INFO: Created: latency-svc-ptkpj Apr 4 18:33:11.468: INFO: Got endpoints: latency-svc-ptkpj [667.915471ms] Apr 4 18:33:11.525: INFO: Created: latency-svc-pzrtv Apr 4 18:33:11.548: INFO: Got endpoints: latency-svc-pzrtv 
[732.164257ms] Apr 4 18:33:11.549: INFO: Created: latency-svc-cdc4p Apr 4 18:33:11.565: INFO: Got endpoints: latency-svc-cdc4p [694.939099ms] Apr 4 18:33:11.582: INFO: Created: latency-svc-kpvjl Apr 4 18:33:11.597: INFO: Got endpoints: latency-svc-kpvjl [653.318321ms] Apr 4 18:33:11.615: INFO: Created: latency-svc-km2b8 Apr 4 18:33:11.670: INFO: Got endpoints: latency-svc-km2b8 [719.925844ms] Apr 4 18:33:11.675: INFO: Created: latency-svc-2pdj8 Apr 4 18:33:11.698: INFO: Got endpoints: latency-svc-2pdj8 [718.846567ms] Apr 4 18:33:11.722: INFO: Created: latency-svc-9s6mq Apr 4 18:33:11.735: INFO: Got endpoints: latency-svc-9s6mq [721.372618ms] Apr 4 18:33:11.753: INFO: Created: latency-svc-nm98m Apr 4 18:33:11.765: INFO: Got endpoints: latency-svc-nm98m [664.820381ms] Apr 4 18:33:11.806: INFO: Created: latency-svc-kkp7c Apr 4 18:33:11.825: INFO: Got endpoints: latency-svc-kkp7c [695.269867ms] Apr 4 18:33:11.865: INFO: Created: latency-svc-qpxgz Apr 4 18:33:11.879: INFO: Got endpoints: latency-svc-qpxgz [683.345423ms] Apr 4 18:33:11.902: INFO: Created: latency-svc-kxjds Apr 4 18:33:11.938: INFO: Got endpoints: latency-svc-kxjds [673.678001ms] Apr 4 18:33:11.949: INFO: Created: latency-svc-tmfvg Apr 4 18:33:11.966: INFO: Got endpoints: latency-svc-tmfvg [665.578089ms] Apr 4 18:33:11.985: INFO: Created: latency-svc-lsv7x Apr 4 18:33:12.003: INFO: Got endpoints: latency-svc-lsv7x [659.341422ms] Apr 4 18:33:12.016: INFO: Created: latency-svc-b22tp Apr 4 18:33:12.026: INFO: Got endpoints: latency-svc-b22tp [641.235958ms] Apr 4 18:33:12.070: INFO: Created: latency-svc-ks8jd Apr 4 18:33:12.094: INFO: Created: latency-svc-cwb4j Apr 4 18:33:12.094: INFO: Got endpoints: latency-svc-ks8jd [672.414109ms] Apr 4 18:33:12.117: INFO: Got endpoints: latency-svc-cwb4j [649.352332ms] Apr 4 18:33:12.142: INFO: Created: latency-svc-65nm6 Apr 4 18:33:12.152: INFO: Got endpoints: latency-svc-65nm6 [603.904848ms] Apr 4 18:33:12.226: INFO: Created: latency-svc-mqwss Apr 4 18:33:12.244: INFO: 
Got endpoints: latency-svc-mqwss [679.306997ms] Apr 4 18:33:12.245: INFO: Created: latency-svc-vhwfj Apr 4 18:33:12.262: INFO: Got endpoints: latency-svc-vhwfj [664.802701ms] Apr 4 18:33:12.287: INFO: Created: latency-svc-w95n9 Apr 4 18:33:12.298: INFO: Got endpoints: latency-svc-w95n9 [628.337003ms] Apr 4 18:33:12.381: INFO: Created: latency-svc-fnbx4 Apr 4 18:33:12.393: INFO: Got endpoints: latency-svc-fnbx4 [694.828071ms] Apr 4 18:33:12.394: INFO: Created: latency-svc-tqzzm Apr 4 18:33:12.406: INFO: Got endpoints: latency-svc-tqzzm [671.487154ms] Apr 4 18:33:12.460: INFO: Created: latency-svc-4vw92 Apr 4 18:33:12.549: INFO: Got endpoints: latency-svc-4vw92 [784.243944ms] Apr 4 18:33:12.561: INFO: Created: latency-svc-jnsxn Apr 4 18:33:12.586: INFO: Got endpoints: latency-svc-jnsxn [760.850306ms] Apr 4 18:33:12.604: INFO: Created: latency-svc-2qrkn Apr 4 18:33:12.615: INFO: Got endpoints: latency-svc-2qrkn [736.386757ms] Apr 4 18:33:12.681: INFO: Created: latency-svc-r9sjk Apr 4 18:33:12.706: INFO: Got endpoints: latency-svc-r9sjk [768.180463ms] Apr 4 18:33:12.706: INFO: Created: latency-svc-2w95b Apr 4 18:33:12.721: INFO: Got endpoints: latency-svc-2w95b [754.994271ms] Apr 4 18:33:12.759: INFO: Created: latency-svc-kwnbg Apr 4 18:33:12.775: INFO: Got endpoints: latency-svc-kwnbg [772.473804ms] Apr 4 18:33:12.808: INFO: Created: latency-svc-rvl6c Apr 4 18:33:12.839: INFO: Got endpoints: latency-svc-rvl6c [812.286829ms] Apr 4 18:33:12.874: INFO: Created: latency-svc-gkbfm Apr 4 18:33:12.895: INFO: Got endpoints: latency-svc-gkbfm [801.525002ms] Apr 4 18:33:12.944: INFO: Created: latency-svc-w2lkh Apr 4 18:33:12.949: INFO: Got endpoints: latency-svc-w2lkh [831.779892ms] Apr 4 18:33:12.964: INFO: Created: latency-svc-ctp96 Apr 4 18:33:12.973: INFO: Got endpoints: latency-svc-ctp96 [820.783146ms] Apr 4 18:33:13.000: INFO: Created: latency-svc-lr67s Apr 4 18:33:13.011: INFO: Got endpoints: latency-svc-lr67s [766.719304ms] Apr 4 18:33:13.031: INFO: Created: 
latency-svc-ggbj7 Apr 4 18:33:13.041: INFO: Got endpoints: latency-svc-ggbj7 [779.20353ms] Apr 4 18:33:13.082: INFO: Created: latency-svc-pc84v Apr 4 18:33:13.103: INFO: Got endpoints: latency-svc-pc84v [804.694755ms] Apr 4 18:33:13.103: INFO: Created: latency-svc-7kp9v Apr 4 18:33:13.113: INFO: Got endpoints: latency-svc-7kp9v [719.577301ms] Apr 4 18:33:13.150: INFO: Created: latency-svc-b4zqq Apr 4 18:33:13.167: INFO: Got endpoints: latency-svc-b4zqq [760.8508ms] Apr 4 18:33:13.219: INFO: Created: latency-svc-xfvx8 Apr 4 18:33:13.241: INFO: Got endpoints: latency-svc-xfvx8 [691.920004ms] Apr 4 18:33:13.243: INFO: Created: latency-svc-4bhnd Apr 4 18:33:13.256: INFO: Got endpoints: latency-svc-4bhnd [670.45067ms] Apr 4 18:33:13.279: INFO: Created: latency-svc-nkg2w Apr 4 18:33:13.290: INFO: Got endpoints: latency-svc-nkg2w [674.972052ms] Apr 4 18:33:13.308: INFO: Created: latency-svc-vx84f Apr 4 18:33:13.351: INFO: Got endpoints: latency-svc-vx84f [644.78306ms] Apr 4 18:33:13.353: INFO: Created: latency-svc-v9h66 Apr 4 18:33:13.363: INFO: Got endpoints: latency-svc-v9h66 [641.048856ms] Apr 4 18:33:13.406: INFO: Created: latency-svc-952bc Apr 4 18:33:13.435: INFO: Got endpoints: latency-svc-952bc [659.446789ms] Apr 4 18:33:13.537: INFO: Created: latency-svc-pt2tl Apr 4 18:33:13.548: INFO: Got endpoints: latency-svc-pt2tl [709.55399ms] Apr 4 18:33:13.585: INFO: Created: latency-svc-zmqc4 Apr 4 18:33:13.608: INFO: Got endpoints: latency-svc-zmqc4 [713.004202ms] Apr 4 18:33:13.627: INFO: Created: latency-svc-lz9t4 Apr 4 18:33:13.633: INFO: Got endpoints: latency-svc-lz9t4 [683.657231ms] Apr 4 18:33:13.676: INFO: Created: latency-svc-4tbn6 Apr 4 18:33:13.695: INFO: Created: latency-svc-mhppb Apr 4 18:33:13.695: INFO: Got endpoints: latency-svc-4tbn6 [721.894406ms] Apr 4 18:33:13.712: INFO: Got endpoints: latency-svc-mhppb [700.930528ms] Apr 4 18:33:13.731: INFO: Created: latency-svc-tt7wr Apr 4 18:33:13.742: INFO: Got endpoints: latency-svc-tt7wr [700.608529ms] Apr 4 
18:33:13.755: INFO: Created: latency-svc-phj56 Apr 4 18:33:13.807: INFO: Got endpoints: latency-svc-phj56 [703.852277ms] Apr 4 18:33:13.833: INFO: Created: latency-svc-4vzsm Apr 4 18:33:13.868: INFO: Got endpoints: latency-svc-4vzsm [755.089225ms] Apr 4 18:33:13.967: INFO: Created: latency-svc-j5dvj Apr 4 18:33:13.982: INFO: Got endpoints: latency-svc-j5dvj [814.628975ms] Apr 4 18:33:14.149: INFO: Created: latency-svc-kqwqb Apr 4 18:33:14.921: INFO: Got endpoints: latency-svc-kqwqb [1.680007347s] Apr 4 18:33:14.923: INFO: Created: latency-svc-ktdfm Apr 4 18:33:15.094: INFO: Got endpoints: latency-svc-ktdfm [1.837811785s] Apr 4 18:33:15.179: INFO: Created: latency-svc-l5xtk Apr 4 18:33:15.250: INFO: Got endpoints: latency-svc-l5xtk [1.959523482s] Apr 4 18:33:15.297: INFO: Created: latency-svc-dz9rr Apr 4 18:33:15.745: INFO: Got endpoints: latency-svc-dz9rr [2.393945694s] Apr 4 18:33:15.782: INFO: Created: latency-svc-kdgkw Apr 4 18:33:15.908: INFO: Got endpoints: latency-svc-kdgkw [2.545877098s] Apr 4 18:33:15.962: INFO: Created: latency-svc-d72l5 Apr 4 18:33:15.974: INFO: Got endpoints: latency-svc-d72l5 [2.53934157s] Apr 4 18:33:15.999: INFO: Created: latency-svc-9ddzx Apr 4 18:33:16.052: INFO: Got endpoints: latency-svc-9ddzx [2.503666524s] Apr 4 18:33:16.074: INFO: Created: latency-svc-msrfr Apr 4 18:33:16.083: INFO: Got endpoints: latency-svc-msrfr [2.474755794s] Apr 4 18:33:16.101: INFO: Created: latency-svc-xkkkt Apr 4 18:33:16.113: INFO: Got endpoints: latency-svc-xkkkt [2.479970111s] Apr 4 18:33:16.130: INFO: Created: latency-svc-2c9vs Apr 4 18:33:16.143: INFO: Got endpoints: latency-svc-2c9vs [2.448553435s] Apr 4 18:33:16.184: INFO: Created: latency-svc-v82pg Apr 4 18:33:16.198: INFO: Got endpoints: latency-svc-v82pg [2.485710426s] Apr 4 18:33:16.269: INFO: Created: latency-svc-9q4pc Apr 4 18:33:16.309: INFO: Got endpoints: latency-svc-9q4pc [2.567388908s] Apr 4 18:33:16.322: INFO: Created: latency-svc-zgsz4 Apr 4 18:33:16.335: INFO: Got endpoints: 
latency-svc-zgsz4 [2.528688084s] Apr 4 18:33:16.353: INFO: Created: latency-svc-kf6vb Apr 4 18:33:16.366: INFO: Got endpoints: latency-svc-kf6vb [2.497451101s] Apr 4 18:33:16.395: INFO: Created: latency-svc-b68mc Apr 4 18:33:16.454: INFO: Got endpoints: latency-svc-b68mc [2.471701046s] Apr 4 18:33:16.474: INFO: Created: latency-svc-kmcmm Apr 4 18:33:16.495: INFO: Got endpoints: latency-svc-kmcmm [1.573546145s] Apr 4 18:33:16.591: INFO: Created: latency-svc-nr2zx Apr 4 18:33:16.623: INFO: Created: latency-svc-6g2dd Apr 4 18:33:16.623: INFO: Got endpoints: latency-svc-nr2zx [1.528738143s] Apr 4 18:33:16.633: INFO: Got endpoints: latency-svc-6g2dd [1.382599759s] Apr 4 18:33:16.654: INFO: Created: latency-svc-6twhq Apr 4 18:33:16.669: INFO: Got endpoints: latency-svc-6twhq [923.469054ms] Apr 4 18:33:16.690: INFO: Created: latency-svc-6lq99 Apr 4 18:33:16.765: INFO: Got endpoints: latency-svc-6lq99 [856.036461ms] Apr 4 18:33:16.766: INFO: Created: latency-svc-gbnn8 Apr 4 18:33:16.776: INFO: Got endpoints: latency-svc-gbnn8 [802.109804ms] Apr 4 18:33:16.834: INFO: Created: latency-svc-ptmcz Apr 4 18:33:16.850: INFO: Got endpoints: latency-svc-ptmcz [798.520383ms] Apr 4 18:33:16.909: INFO: Created: latency-svc-2zsnx Apr 4 18:33:16.928: INFO: Created: latency-svc-kqzx5 Apr 4 18:33:16.929: INFO: Got endpoints: latency-svc-2zsnx [845.305994ms] Apr 4 18:33:16.940: INFO: Got endpoints: latency-svc-kqzx5 [827.363308ms] Apr 4 18:33:16.959: INFO: Created: latency-svc-frqc5 Apr 4 18:33:16.984: INFO: Got endpoints: latency-svc-frqc5 [840.275765ms] Apr 4 18:33:17.009: INFO: Created: latency-svc-dhqtj Apr 4 18:33:17.046: INFO: Got endpoints: latency-svc-dhqtj [848.251337ms] Apr 4 18:33:17.074: INFO: Created: latency-svc-tswvn Apr 4 18:33:17.090: INFO: Got endpoints: latency-svc-tswvn [780.717748ms] Apr 4 18:33:17.122: INFO: Created: latency-svc-nlm9h Apr 4 18:33:17.190: INFO: Got endpoints: latency-svc-nlm9h [854.762708ms] Apr 4 18:33:17.211: INFO: Created: latency-svc-fvz6w Apr 4 
18:33:17.232: INFO: Got endpoints: latency-svc-fvz6w [866.353575ms] Apr 4 18:33:17.254: INFO: Created: latency-svc-9hxnf Apr 4 18:33:17.280: INFO: Got endpoints: latency-svc-9hxnf [826.557023ms] Apr 4 18:33:17.321: INFO: Created: latency-svc-2kj22 Apr 4 18:33:17.345: INFO: Got endpoints: latency-svc-2kj22 [850.280569ms] Apr 4 18:33:17.374: INFO: Created: latency-svc-rqcdl Apr 4 18:33:17.388: INFO: Got endpoints: latency-svc-rqcdl [764.47662ms] Apr 4 18:33:17.472: INFO: Created: latency-svc-rc4wm Apr 4 18:33:17.494: INFO: Created: latency-svc-hdgfz Apr 4 18:33:17.494: INFO: Got endpoints: latency-svc-rc4wm [861.859002ms] Apr 4 18:33:17.543: INFO: Got endpoints: latency-svc-hdgfz [874.531059ms] Apr 4 18:33:17.597: INFO: Created: latency-svc-25z8c Apr 4 18:33:17.603: INFO: Got endpoints: latency-svc-25z8c [838.691811ms] Apr 4 18:33:17.645: INFO: Created: latency-svc-df6nz Apr 4 18:33:17.683: INFO: Got endpoints: latency-svc-df6nz [907.125784ms] Apr 4 18:33:17.765: INFO: Created: latency-svc-s745z Apr 4 18:33:17.833: INFO: Created: latency-svc-j8cbg Apr 4 18:33:17.833: INFO: Got endpoints: latency-svc-s745z [982.292848ms] Apr 4 18:33:17.860: INFO: Got endpoints: latency-svc-j8cbg [931.043916ms] Apr 4 18:33:17.914: INFO: Created: latency-svc-64nbf Apr 4 18:33:17.923: INFO: Got endpoints: latency-svc-64nbf [982.399112ms] Apr 4 18:33:17.946: INFO: Created: latency-svc-4dclf Apr 4 18:33:17.971: INFO: Got endpoints: latency-svc-4dclf [986.72835ms] Apr 4 18:33:18.308: INFO: Created: latency-svc-5r25c Apr 4 18:33:18.343: INFO: Got endpoints: latency-svc-5r25c [1.29663008s] Apr 4 18:33:18.526: INFO: Created: latency-svc-vbv5g Apr 4 18:33:18.556: INFO: Got endpoints: latency-svc-vbv5g [1.465272736s] Apr 4 18:33:18.613: INFO: Created: latency-svc-l246b Apr 4 18:33:18.723: INFO: Got endpoints: latency-svc-l246b [1.532602131s] Apr 4 18:33:18.793: INFO: Created: latency-svc-7tghc Apr 4 18:33:18.944: INFO: Got endpoints: latency-svc-7tghc [1.7122467s] Apr 4 18:33:18.956: INFO: 
Created: latency-svc-gv8s2 Apr 4 18:33:18.981: INFO: Got endpoints: latency-svc-gv8s2 [1.700733814s] Apr 4 18:33:19.030: INFO: Created: latency-svc-g74gv Apr 4 18:33:19.088: INFO: Got endpoints: latency-svc-g74gv [1.742716132s] Apr 4 18:33:19.314: INFO: Created: latency-svc-5k2qd Apr 4 18:33:19.490: INFO: Got endpoints: latency-svc-5k2qd [2.102449133s] Apr 4 18:33:19.615: INFO: Created: latency-svc-r5jdt Apr 4 18:33:19.982: INFO: Got endpoints: latency-svc-r5jdt [2.487087096s] Apr 4 18:33:20.244: INFO: Created: latency-svc-4bft6 Apr 4 18:33:20.588: INFO: Got endpoints: latency-svc-4bft6 [3.044213955s] Apr 4 18:33:20.588: INFO: Created: latency-svc-8bmlb Apr 4 18:33:20.969: INFO: Got endpoints: latency-svc-8bmlb [3.365189371s] Apr 4 18:33:20.971: INFO: Created: latency-svc-mpfgx Apr 4 18:33:20.983: INFO: Got endpoints: latency-svc-mpfgx [3.299349633s] Apr 4 18:33:21.184: INFO: Created: latency-svc-4zh7z Apr 4 18:33:21.276: INFO: Got endpoints: latency-svc-4zh7z [3.443586989s] Apr 4 18:33:21.347: INFO: Created: latency-svc-wxktw Apr 4 18:33:21.372: INFO: Got endpoints: latency-svc-wxktw [3.512031884s] Apr 4 18:33:21.413: INFO: Created: latency-svc-6rr7d Apr 4 18:33:21.432: INFO: Got endpoints: latency-svc-6rr7d [3.509476391s] Apr 4 18:33:21.463: INFO: Created: latency-svc-vk8sl Apr 4 18:33:21.480: INFO: Got endpoints: latency-svc-vk8sl [3.509780688s] Apr 4 18:33:21.537: INFO: Created: latency-svc-vb6p6 Apr 4 18:33:21.544: INFO: Got endpoints: latency-svc-vb6p6 [3.200710329s] Apr 4 18:33:21.588: INFO: Created: latency-svc-8gk26 Apr 4 18:33:21.610: INFO: Got endpoints: latency-svc-8gk26 [3.054474776s] Apr 4 18:33:21.631: INFO: Created: latency-svc-2kmwb Apr 4 18:33:21.681: INFO: Got endpoints: latency-svc-2kmwb [2.958298775s] Apr 4 18:33:21.714: INFO: Created: latency-svc-tsb9c Apr 4 18:33:21.754: INFO: Got endpoints: latency-svc-tsb9c [2.809585925s] Apr 4 18:33:21.813: INFO: Created: latency-svc-hgrsz Apr 4 18:33:21.835: INFO: Created: latency-svc-cpnq5 Apr 4 
18:33:21.835: INFO: Got endpoints: latency-svc-hgrsz [2.85368107s] Apr 4 18:33:21.849: INFO: Got endpoints: latency-svc-cpnq5 [2.761509146s] Apr 4 18:33:21.957: INFO: Created: latency-svc-shdjl Apr 4 18:33:21.978: INFO: Got endpoints: latency-svc-shdjl [2.488377317s] Apr 4 18:33:21.979: INFO: Created: latency-svc-5dtlb Apr 4 18:33:21.995: INFO: Got endpoints: latency-svc-5dtlb [2.013692343s] Apr 4 18:33:22.015: INFO: Created: latency-svc-jh4xt Apr 4 18:33:22.025: INFO: Got endpoints: latency-svc-jh4xt [1.437532872s] Apr 4 18:33:22.075: INFO: Created: latency-svc-d6bpf Apr 4 18:33:22.100: INFO: Got endpoints: latency-svc-d6bpf [1.13128767s] Apr 4 18:33:22.131: INFO: Created: latency-svc-mvgd2 Apr 4 18:33:22.145: INFO: Got endpoints: latency-svc-mvgd2 [1.162178977s] Apr 4 18:33:22.167: INFO: Created: latency-svc-l74c7 Apr 4 18:33:22.189: INFO: Got endpoints: latency-svc-l74c7 [913.056188ms] Apr 4 18:33:22.204: INFO: Created: latency-svc-2z6bm Apr 4 18:33:22.226: INFO: Got endpoints: latency-svc-2z6bm [854.276453ms] Apr 4 18:33:22.249: INFO: Created: latency-svc-4cp2m Apr 4 18:33:22.259: INFO: Got endpoints: latency-svc-4cp2m [826.843711ms] Apr 4 18:33:22.285: INFO: Created: latency-svc-vsm64 Apr 4 18:33:22.309: INFO: Got endpoints: latency-svc-vsm64 [828.721757ms] Apr 4 18:33:22.329: INFO: Created: latency-svc-t54x8 Apr 4 18:33:22.353: INFO: Got endpoints: latency-svc-t54x8 [809.549891ms] Apr 4 18:33:22.370: INFO: Created: latency-svc-85mpj Apr 4 18:33:22.382: INFO: Got endpoints: latency-svc-85mpj [772.225443ms] Apr 4 18:33:22.400: INFO: Created: latency-svc-cmdxh Apr 4 18:33:22.454: INFO: Got endpoints: latency-svc-cmdxh [772.816904ms] Apr 4 18:33:22.458: INFO: Created: latency-svc-jfgbm Apr 4 18:33:22.621: INFO: Got endpoints: latency-svc-jfgbm [867.105855ms] Apr 4 18:33:22.631: INFO: Created: latency-svc-67zwx Apr 4 18:33:22.683: INFO: Got endpoints: latency-svc-67zwx [848.052787ms] Apr 4 18:33:22.796: INFO: Created: latency-svc-slkb9 Apr 4 18:33:22.802: INFO: 
Got endpoints: latency-svc-slkb9 [952.59887ms] Apr 4 18:33:23.814: INFO: Created: latency-svc-xgxv6 Apr 4 18:33:24.341: INFO: Got endpoints: latency-svc-xgxv6 [2.362040464s] Apr 4 18:33:24.370: INFO: Created: latency-svc-7d4dj Apr 4 18:33:24.397: INFO: Got endpoints: latency-svc-7d4dj [2.401764679s] Apr 4 18:33:24.513: INFO: Created: latency-svc-f47v9 Apr 4 18:33:24.719: INFO: Got endpoints: latency-svc-f47v9 [2.694209215s] Apr 4 18:33:24.803: INFO: Created: latency-svc-xwljg Apr 4 18:33:24.819: INFO: Got endpoints: latency-svc-xwljg [2.718901101s] Apr 4 18:33:24.840: INFO: Created: latency-svc-w4x9n Apr 4 18:33:24.846: INFO: Got endpoints: latency-svc-w4x9n [2.7011129s] Apr 4 18:33:24.866: INFO: Created: latency-svc-cstzf Apr 4 18:33:24.888: INFO: Got endpoints: latency-svc-cstzf [2.698445273s] Apr 4 18:33:24.951: INFO: Created: latency-svc-8b2gn Apr 4 18:33:24.958: INFO: Got endpoints: latency-svc-8b2gn [2.731749549s] Apr 4 18:33:24.991: INFO: Created: latency-svc-zrdgt Apr 4 18:33:25.000: INFO: Got endpoints: latency-svc-zrdgt [2.740480346s] Apr 4 18:33:25.014: INFO: Created: latency-svc-qwccj Apr 4 18:33:25.018: INFO: Got endpoints: latency-svc-qwccj [2.708679579s] Apr 4 18:33:25.039: INFO: Created: latency-svc-4c6n5 Apr 4 18:33:25.100: INFO: Got endpoints: latency-svc-4c6n5 [2.746795387s] Apr 4 18:33:25.105: INFO: Created: latency-svc-8hqh4 Apr 4 18:33:25.162: INFO: Got endpoints: latency-svc-8hqh4 [2.779373269s] Apr 4 18:33:25.258: INFO: Created: latency-svc-wqm8v Apr 4 18:33:25.264: INFO: Got endpoints: latency-svc-wqm8v [2.809449638s] Apr 4 18:33:25.902: INFO: Created: latency-svc-mc9jq Apr 4 18:33:25.916: INFO: Got endpoints: latency-svc-mc9jq [3.295055137s] Apr 4 18:33:26.118: INFO: Created: latency-svc-bn87l Apr 4 18:33:26.125: INFO: Got endpoints: latency-svc-bn87l [3.442528431s] Apr 4 18:33:26.142: INFO: Created: latency-svc-892x7 Apr 4 18:33:26.821: INFO: Got endpoints: latency-svc-892x7 [4.01885931s] Apr 4 18:33:27.112: INFO: Created: 
latency-svc-spcfk Apr 4 18:33:27.128: INFO: Got endpoints: latency-svc-spcfk [2.787118078s] Apr 4 18:33:27.203: INFO: Created: latency-svc-2kk5b Apr 4 18:33:27.268: INFO: Got endpoints: latency-svc-2kk5b [2.870658528s] Apr 4 18:33:27.292: INFO: Created: latency-svc-slcvv Apr 4 18:33:27.308: INFO: Got endpoints: latency-svc-slcvv [2.588423372s] Apr 4 18:33:27.330: INFO: Created: latency-svc-qsq4w Apr 4 18:33:27.358: INFO: Got endpoints: latency-svc-qsq4w [2.539376481s] Apr 4 18:33:27.423: INFO: Created: latency-svc-bctpz Apr 4 18:33:27.466: INFO: Got endpoints: latency-svc-bctpz [2.619448087s] Apr 4 18:33:27.466: INFO: Created: latency-svc-z6r9t Apr 4 18:33:27.479: INFO: Got endpoints: latency-svc-z6r9t [2.591068008s] Apr 4 18:33:27.508: INFO: Created: latency-svc-knvgw Apr 4 18:33:27.515: INFO: Got endpoints: latency-svc-knvgw [2.556980601s] Apr 4 18:33:27.579: INFO: Created: latency-svc-nqzcr Apr 4 18:33:27.605: INFO: Got endpoints: latency-svc-nqzcr [2.605602823s] Apr 4 18:33:28.172: INFO: Created: latency-svc-bv4s5 Apr 4 18:33:28.180: INFO: Got endpoints: latency-svc-bv4s5 [3.162169824s] Apr 4 18:33:28.347: INFO: Created: latency-svc-t6gq7 Apr 4 18:33:28.360: INFO: Got endpoints: latency-svc-t6gq7 [3.259512575s] Apr 4 18:33:28.399: INFO: Created: latency-svc-pc92n Apr 4 18:33:28.414: INFO: Got endpoints: latency-svc-pc92n [3.252122111s] Apr 4 18:33:28.493: INFO: Created: latency-svc-7b9gm Apr 4 18:33:28.512: INFO: Got endpoints: latency-svc-7b9gm [3.248802217s] Apr 4 18:33:28.513: INFO: Created: latency-svc-v7tkh Apr 4 18:33:28.544: INFO: Got endpoints: latency-svc-v7tkh [2.627812385s] Apr 4 18:33:28.579: INFO: Created: latency-svc-bvvkk Apr 4 18:33:28.609: INFO: Got endpoints: latency-svc-bvvkk [2.483637865s] Apr 4 18:33:28.627: INFO: Created: latency-svc-4r5td Apr 4 18:33:28.644: INFO: Got endpoints: latency-svc-4r5td [1.822942574s] Apr 4 18:33:28.663: INFO: Created: latency-svc-8sx2x Apr 4 18:33:28.679: INFO: Got endpoints: latency-svc-8sx2x [1.551254189s] 
Apr 4 18:33:29.047: INFO: Created: latency-svc-v6gxf Apr 4 18:33:29.083: INFO: Got endpoints: latency-svc-v6gxf [1.815444616s] Apr 4 18:33:29.083: INFO: Created: latency-svc-m5p7d Apr 4 18:33:29.092: INFO: Got endpoints: latency-svc-m5p7d [1.784353639s] Apr 4 18:33:29.119: INFO: Created: latency-svc-dz6tr Apr 4 18:33:29.127: INFO: Got endpoints: latency-svc-dz6tr [1.768537606s] Apr 4 18:33:29.196: INFO: Created: latency-svc-fr9k8 Apr 4 18:33:29.202: INFO: Got endpoints: latency-svc-fr9k8 [1.736044492s] Apr 4 18:33:29.202: INFO: Latencies: [47.766086ms 94.984275ms 131.114571ms 213.253689ms 237.059378ms 273.097814ms 357.049715ms 399.263042ms 536.031813ms 566.242416ms 603.904848ms 626.845049ms 628.337003ms 641.048856ms 641.235958ms 644.78306ms 649.352332ms 653.318321ms 659.341422ms 659.446789ms 664.802701ms 664.820381ms 665.578089ms 667.915471ms 670.45067ms 671.487154ms 672.414109ms 673.678001ms 674.972052ms 679.306997ms 683.345423ms 683.657231ms 691.920004ms 694.828071ms 694.939099ms 695.269867ms 698.746716ms 700.608529ms 700.930528ms 703.852277ms 709.55399ms 713.004202ms 718.846567ms 719.577301ms 719.925844ms 721.372618ms 721.894406ms 732.164257ms 736.386757ms 748.518817ms 754.994271ms 755.089225ms 760.850306ms 760.8508ms 764.47662ms 766.719304ms 768.180463ms 772.225443ms 772.473804ms 772.816904ms 777.803651ms 779.20353ms 780.717748ms 784.243944ms 789.707251ms 798.520383ms 801.525002ms 802.008429ms 802.109804ms 804.694755ms 809.549891ms 812.286829ms 814.628975ms 814.924966ms 819.576143ms 820.783146ms 826.557023ms 826.843711ms 827.363308ms 828.721757ms 831.779892ms 838.691811ms 840.275765ms 845.305994ms 848.052787ms 848.251337ms 850.280569ms 854.276453ms 854.762708ms 856.036461ms 861.859002ms 866.353575ms 867.105855ms 874.531059ms 907.125784ms 913.056188ms 917.449173ms 923.469054ms 931.043916ms 931.506629ms 952.402341ms 952.59887ms 982.292848ms 982.399112ms 986.72835ms 1.13128767s 1.162178977s 1.182502858s 1.187831323s 1.29663008s 1.382599759s 1.437532872s 
1.442842196s 1.458162705s 1.465272736s 1.485258889s 1.499202887s 1.507178993s 1.528738143s 1.532602131s 1.534294837s 1.551254189s 1.56287037s 1.563529891s 1.563885566s 1.569251308s 1.573546145s 1.587149705s 1.599290231s 1.600505789s 1.680007347s 1.684291808s 1.700733814s 1.7122467s 1.736044492s 1.742716132s 1.768537606s 1.784353639s 1.815444616s 1.822942574s 1.837811785s 1.959523482s 2.013692343s 2.102449133s 2.362040464s 2.393945694s 2.401764679s 2.448553435s 2.471701046s 2.474755794s 2.479970111s 2.483637865s 2.485710426s 2.487087096s 2.488377317s 2.497451101s 2.503666524s 2.528688084s 2.53934157s 2.539376481s 2.545877098s 2.556980601s 2.567388908s 2.588423372s 2.591068008s 2.605602823s 2.619448087s 2.627812385s 2.694209215s 2.698445273s 2.7011129s 2.708679579s 2.718901101s 2.731749549s 2.740480346s 2.746795387s 2.761509146s 2.779373269s 2.787118078s 2.809449638s 2.809585925s 2.85368107s 2.870658528s 2.958298775s 3.044213955s 3.054474776s 3.162169824s 3.200710329s 3.248802217s 3.252122111s 3.259512575s 3.295055137s 3.299349633s 3.365189371s 3.442528431s 3.443586989s 3.509476391s 3.509780688s 3.512031884s 4.01885931s] Apr 4 18:33:29.202: INFO: 50 %ile: 952.402341ms Apr 4 18:33:29.202: INFO: 90 %ile: 2.809585925s Apr 4 18:33:29.202: INFO: 99 %ile: 3.512031884s Apr 4 18:33:29.202: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:33:29.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-9124" for this suite. 
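The `50 %ile` / `90 %ile` / `99 %ile` lines above are consistent with sorting the 200 latency samples and indexing directly into the sorted slice. The actual e2e framework is Go; the following Python sketch only illustrates the index arithmetic implied by the reported values (with 200 samples, the 99th percentile is the second-to-last element, matching `3.512031884s` just before the `4.01885931s` maximum):

```python
def latency_percentiles(samples):
    """Return (p50, p90, p99) by direct indexing into the sorted samples,
    mirroring the '50 %ile' / '90 %ile' / '99 %ile' lines in the log."""
    s = sorted(samples)
    n = len(s)
    # With n == 200 this picks 0-based indices 100, 180, and 198.
    return s[n // 2], s[(n * 9) // 10], s[(n * 99) // 100]

# Synthetic example with the same sample count as the log (200 values):
p50, p90, p99 = latency_percentiles(list(range(1, 201)))
```

Note that this style of percentile is a simple order statistic, not an interpolated quantile, which is why the reported percentiles are always values that appear verbatim in the sample list.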
• [SLOW TEST:40.002 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":281,"completed":207,"skipped":3494,"failed":0} S ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:33:29.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52 [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0 Apr 4 18:33:29.407: INFO: Pod name my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0: Found 0 pods out of 1 Apr 4 18:33:34.412: INFO: Pod name my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0: Found 1 pods out of 1 Apr 4 18:33:34.412: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0" are running Apr 4 18:33:42.505: INFO: Pod "my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0-rr954" is running (conditions: 
[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 18:33:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 18:33:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 18:33:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 18:33:29 +0000 UTC Reason: Message:}]) Apr 4 18:33:42.505: INFO: Trying to dial the pod Apr 4 18:33:47.549: INFO: Controller my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0: Got expected result from replica 1 [my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0-rr954]: "my-hostname-basic-4b3c74e5-f974-4d8f-bce2-b14371f199f0-rr954", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:33:47.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4944" for this suite. 
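The ReplicationController test above waits for the expected replica count, then dials each pod and checks that it echoes its own pod name ("Got expected result from replica 1 … 1 of 1 required successes"). A hypothetical Python sketch of that success bookkeeping (the pod names and the echo-your-hostname behavior are taken from the log; the function name is illustrative, not from the e2e framework):

```python
def all_replicas_serving(responses, expected_pods):
    """Check that every expected pod echoed its own name when dialed.

    `responses` maps pod name -> body returned by that pod; a pod serving
    its hostname returns exactly its own pod name, as seen in the log.
    """
    successes = sum(1 for pod in expected_pods if responses.get(pod) == pod)
    return successes == len(expected_pods)

# One replica, responding with its own name, as in the run above:
ok = all_replicas_serving(
    {"my-hostname-basic-rr954": "my-hostname-basic-rr954"},
    ["my-hostname-basic-rr954"],
)
```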
• [SLOW TEST:18.405 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":281,"completed":208,"skipped":3495,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:33:47.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the 
/apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:33:47.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-497" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":281,"completed":209,"skipped":3504,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:33:47.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 4 18:33:48.020: INFO: Waiting up to 5m0s for pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720" in namespace "emptydir-9804" to be "Succeeded or Failed" Apr 4 18:33:48.032: INFO: Pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720": Phase="Pending", Reason="", readiness=false. Elapsed: 11.951354ms Apr 4 18:33:50.202: INFO: Pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.181725183s Apr 4 18:33:52.508: INFO: Pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487815322s Apr 4 18:33:55.006: INFO: Pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720": Phase="Pending", Reason="", readiness=false. Elapsed: 6.985026059s Apr 4 18:33:57.748: INFO: Pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720": Phase="Pending", Reason="", readiness=false. Elapsed: 9.728023796s Apr 4 18:33:59.909: INFO: Pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720": Phase="Pending", Reason="", readiness=false. Elapsed: 11.888961018s Apr 4 18:34:01.938: INFO: Pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.917964518s STEP: Saw pod success Apr 4 18:34:01.938: INFO: Pod "pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720" satisfied condition "Succeeded or Failed" Apr 4 18:34:01.944: INFO: Trying to get logs from node latest-worker pod pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720 container test-container: STEP: delete the pod Apr 4 18:34:02.004: INFO: Waiting for pod pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720 to disappear Apr 4 18:34:02.010: INFO: Pod pod-fcc019e2-ea6c-4934-a04d-2e6259fdb720 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:34:02.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9804" for this suite. 
• [SLOW TEST:14.236 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":210,"skipped":3515,"failed":0} [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:34:02.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 4 18:34:02.148: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-a 5f9cd4c3-6448-4cbd-b74b-3ea6d3b01a98 5410796 0 2020-04-04 18:34:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:34:02.148: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-a 5f9cd4c3-6448-4cbd-b74b-3ea6d3b01a98 5410796 0 2020-04-04 18:34:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 4 18:34:12.249: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-a 5f9cd4c3-6448-4cbd-b74b-3ea6d3b01a98 5411136 0 2020-04-04 18:34:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:34:12.249: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-a 5f9cd4c3-6448-4cbd-b74b-3ea6d3b01a98 5411136 0 2020-04-04 18:34:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 4 18:34:22.254: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-a 5f9cd4c3-6448-4cbd-b74b-3ea6d3b01a98 5411171 0 2020-04-04 18:34:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:34:22.254: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-a 5f9cd4c3-6448-4cbd-b74b-3ea6d3b01a98 5411171 0 2020-04-04 18:34:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] 
[] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 4 18:34:32.260: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-a 5f9cd4c3-6448-4cbd-b74b-3ea6d3b01a98 5411198 0 2020-04-04 18:34:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:34:32.260: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-a 5f9cd4c3-6448-4cbd-b74b-3ea6d3b01a98 5411198 0 2020-04-04 18:34:02 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 4 18:34:42.267: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-b edbf7b5e-4c40-4d8a-8223-232e0aee1c19 5411226 0 2020-04-04 18:34:42 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 4 18:34:42.267: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-b edbf7b5e-4c40-4d8a-8223-232e0aee1c19 5411226 0 2020-04-04 18:34:42 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 4 18:34:52.273: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b 
watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-b edbf7b5e-4c40-4d8a-8223-232e0aee1c19 5411254 0 2020-04-04 18:34:42 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 4 18:34:52.273: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-2693 /api/v1/namespaces/watch-2693/configmaps/e2e-watch-test-configmap-b edbf7b5e-4c40-4d8a-8223-232e0aee1c19 5411254 0 2020-04-04 18:34:42 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:35:02.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2693" for this suite.
• [SLOW TEST:60.259 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":281,"completed":211,"skipped":3515,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:35:02.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:35:02.680: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"92bbd481-2ce5-4e3b-bbf6-a9fc8e517819", Controller:(*bool)(0xc004c445ba), BlockOwnerDeletion:(*bool)(0xc004c445bb)}}
Apr 4 18:35:02.754: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6944b6d7-3ef4-4acc-9adf-4c1bc1d9c1e2", Controller:(*bool)(0xc0031be4f2), BlockOwnerDeletion:(*bool)(0xc0031be4f3)}}
Apr 4 18:35:02.758: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9bde479a-6218-4d7d-8719-76fddcccd6de", Controller:(*bool)(0xc004c44792), BlockOwnerDeletion:(*bool)(0xc004c44793)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:35:07.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9765" for this suite.
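The garbage-collector test above builds exactly the ownership circle visible in the log (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and checks that the collector does not deadlock on it. A minimal sketch, not part of the e2e framework, modeling those ownerReferences as a graph and detecting the circle:

```python
# Sketch only: models the ownerReferences reported in the log above and
# detects the dependency circle with a depth-first walk.

def find_cycle(owner_refs):
    """owner_refs maps a pod name to the names of its owners.
    Returns the first cycle found as a list of names, or None."""
    visited, stack = set(), []

    def walk(name):
        if name in stack:
            return stack[stack.index(name):] + [name]  # closed the loop
        if name in visited:
            return None
        visited.add(name)
        stack.append(name)
        for owner in owner_refs.get(name, []):
            cycle = walk(owner)
            if cycle:
                return cycle
        stack.pop()
        return None

    for name in owner_refs:
        cycle = walk(name)
        if cycle:
            return cycle
    return None

# Ownership as reported in the log: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
refs = {"pod1": ["pod3"], "pod2": ["pod1"], "pod3": ["pod2"]}
print(find_cycle(refs))  # → ['pod1', 'pod3', 'pod2', 'pod1']
```

The real collector handles this by treating the circle as a group with no live external owner, so all three pods are collected within the test's 5-second window.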
• [SLOW TEST:5.670 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":281,"completed":212,"skipped":3575,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:35:07.952: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 4 18:35:08.011: INFO: Waiting up to 5m0s for pod "pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0" in namespace "emptydir-4767" to be "Succeeded or Failed"
Apr 4 18:35:08.016: INFO: Pod "pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.571058ms
Apr 4 18:35:10.019: INFO: Pod "pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007855963s
Apr 4 18:35:12.044: INFO: Pod "pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033270545s
Apr 4 18:35:14.048: INFO: Pod "pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036929418s
STEP: Saw pod success
Apr 4 18:35:14.048: INFO: Pod "pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0" satisfied condition "Succeeded or Failed"
Apr 4 18:35:14.051: INFO: Trying to get logs from node latest-worker2 pod pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0 container test-container:
STEP: delete the pod
Apr 4 18:35:14.092: INFO: Waiting for pod pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0 to disappear
Apr 4 18:35:14.109: INFO: Pod pod-4717f29e-95e3-483a-8e30-1e33e21a4ff0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:35:14.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4767" for this suite.
• [SLOW TEST:6.164 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:43
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":213,"skipped":3582,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:35:14.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 18:35:16.044: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 18:35:19.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622116, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622116, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622116, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622116, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:35:21.428: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622116, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622116, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622116, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622116, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 18:35:24.417: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:35:24.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1956-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:35:25.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7868" for this suite.
STEP: Destroying namespace "webhook-7868-markers" for this suite.
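The mutating webhook registered above answers each AdmissionReview request with a base64-encoded JSONPatch. An illustrative sketch of that response shape (this is not the e2e sample webhook's code; the label name is invented for the example, but the AdmissionReview v1 envelope, `patchType`, and base64 patch encoding are the real API contract):

```python
# Sketch of a mutating admission webhook response body. The mutation
# (adding a "mutated" label) is hypothetical; the envelope follows the
# admission.k8s.io/v1 AdmissionReview response format.
import base64
import json

def mutate_response(review_uid):
    patch = [{"op": "add", "path": "/metadata/labels/mutated", "value": "true"}]
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review_uid,           # must echo the request's uid
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch).encode()).decode(),
        },
    }

resp = mutate_response("0000-demo-uid")
print(json.loads(base64.b64decode(resp["response"]["patch"]))[0]["path"])
# → /metadata/labels/mutated
```

For the custom-resource-with-pruning case, the apiserver additionally prunes any patched fields that the CRD's structural schema does not declare, which is what this spec verifies.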
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:11.458 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":281,"completed":214,"skipped":3588,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:35:25.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating server pod server in namespace prestop-5643
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5643
STEP: Deleting pre-stop pod
Apr 4 18:35:58.805: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:35:58.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5643" for this suite.
• [SLOW TEST:33.280 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should call prestop when killing a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":281,"completed":215,"skipped":3632,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:35:58.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-45bb3f1d-611b-49d3-bf4e-d31f2dcd57ba
STEP: Creating a pod to test consume configMaps
Apr 4 18:35:59.072: INFO: Waiting up to 5m0s for pod "pod-configmaps-b07b28ad-b51d-40c7-b305-029405974ce9" in namespace "configmap-1780" to be "Succeeded or Failed"
Apr 4 18:35:59.325: INFO: Pod "pod-configmaps-b07b28ad-b51d-40c7-b305-029405974ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 253.337111ms
Apr 4 18:36:01.329: INFO: Pod "pod-configmaps-b07b28ad-b51d-40c7-b305-029405974ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257217223s
Apr 4 18:36:03.333: INFO: Pod "pod-configmaps-b07b28ad-b51d-40c7-b305-029405974ce9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.260752183s
STEP: Saw pod success
Apr 4 18:36:03.333: INFO: Pod "pod-configmaps-b07b28ad-b51d-40c7-b305-029405974ce9" satisfied condition "Succeeded or Failed"
Apr 4 18:36:03.335: INFO: Trying to get logs from node latest-worker pod pod-configmaps-b07b28ad-b51d-40c7-b305-029405974ce9 container configmap-volume-test:
STEP: delete the pod
Apr 4 18:36:03.383: INFO: Waiting for pod pod-configmaps-b07b28ad-b51d-40c7-b305-029405974ce9 to disappear
Apr 4 18:36:03.387: INFO: Pod pod-configmaps-b07b28ad-b51d-40c7-b305-029405974ce9 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:36:03.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1780" for this suite.
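The repeated "Waiting up to 5m0s for pod … to be "Succeeded or Failed"" sequences in these specs all follow one framework pattern: poll the pod phase at an interval until it reaches a terminal phase or the timeout expires. A minimal sketch of that loop (the hypothetical `get_phase` callable stands in for the real API read; the actual framework uses client-go):

```python
# Sketch of the e2e framework's wait-for-pod pattern (simplified).
import time

def wait_for_pod(get_phase, timeout=300.0, interval=2.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase or the timeout elapses."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        sleep(interval)
    raise TimeoutError("pod did not reach Succeeded or Failed")

# Usage: simulate the Pending -> Pending -> Succeeded sequence from the log.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod(lambda: next(phases), sleep=lambda s: None))  # → Succeeded
```

The "Elapsed:" values in the log are simply the time since `start` at each poll, which is why they step up in roughly 2-second increments.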
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":281,"completed":216,"skipped":3652,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:36:03.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-5576c3ba-bd9d-45ac-b47f-2c295a572806
STEP: Creating a pod to test consume configMaps
Apr 4 18:36:04.042: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d" in namespace "projected-5608" to be "Succeeded or Failed"
Apr 4 18:36:04.112: INFO: Pod "pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d": Phase="Pending", Reason="", readiness=false. Elapsed: 69.996959ms
Apr 4 18:36:06.137: INFO: Pod "pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094651942s
Apr 4 18:36:08.141: INFO: Pod "pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09847375s
Apr 4 18:36:10.145: INFO: Pod "pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.102675333s
STEP: Saw pod success
Apr 4 18:36:10.145: INFO: Pod "pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d" satisfied condition "Succeeded or Failed"
Apr 4 18:36:10.149: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d container projected-configmap-volume-test:
STEP: delete the pod
Apr 4 18:36:10.279: INFO: Waiting for pod pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d to disappear
Apr 4 18:36:10.426: INFO: Pod pod-projected-configmaps-e8cf993d-f823-4d9e-9da1-cbbc0d2aa95d no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:36:10.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5608" for this suite.
• [SLOW TEST:7.074 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":217,"skipped":3659,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:36:10.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-3837
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3837 to expose endpoints map[]
Apr 4 18:36:10.650: INFO: Get endpoints failed (12.493641ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 4 18:36:11.652: INFO: successfully validated that service multi-endpoint-test in namespace services-3837 exposes endpoints map[] (1.014769219s elapsed)
STEP: Creating pod pod1 in namespace services-3837
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3837 to expose endpoints map[pod1:[100]]
Apr 4 18:36:15.961: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.303396906s elapsed, will retry)
Apr 4 18:36:21.641: INFO: successfully validated that service multi-endpoint-test in namespace services-3837 exposes endpoints map[pod1:[100]] (9.983901841s elapsed)
STEP: Creating pod pod2 in namespace services-3837
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3837 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 4 18:36:26.078: INFO: Unexpected endpoints: found map[9a0d2fd8-fd61-4a82-abcd-9580be3bc976:[100]], expected map[pod1:[100] pod2:[101]] (4.413709371s elapsed, will retry)
Apr 4 18:36:33.639: INFO: successfully validated that service multi-endpoint-test in namespace services-3837 exposes endpoints map[pod1:[100] pod2:[101]] (11.974292191s elapsed)
STEP: Deleting pod pod1 in namespace services-3837
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3837 to expose endpoints map[pod2:[101]]
Apr 4 18:36:34.807: INFO: successfully validated that service multi-endpoint-test in namespace services-3837 exposes endpoints map[pod2:[101]] (1.163040999s elapsed)
STEP: Deleting pod pod2 in namespace services-3837
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3837 to expose endpoints map[]
Apr 4 18:36:36.275: INFO: successfully validated that service multi-endpoint-test in namespace services-3837 exposes endpoints map[] (1.261248357s elapsed)
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:36:36.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3837" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:26.182 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":281,"completed":218,"skipped":3673,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:36:36.650: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Apr 4 18:36:45.921: INFO: Successfully updated pod "annotationupdate2065a016-c8f4-4599-b664-c95fdae59a63"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:36:47.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3385" for this suite.
• [SLOW TEST:11.291 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":281,"completed":219,"skipped":3687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:36:47.942: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-177c39ac-a64d-48e6-b65e-4a2a7e108747
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-177c39ac-a64d-48e6-b65e-4a2a7e108747
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:36:54.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-248" for this suite.
• [SLOW TEST:6.614 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":281,"completed":220,"skipped":3724,"failed":0}
S
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:36:54.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-dbef3c3f-f4fa-49eb-8c9c-9b67de0ecc9f
STEP: Creating a pod to test consume secrets
Apr 4 18:36:55.262: INFO: Waiting up to 5m0s for pod "pod-secrets-7234185a-32cb-4d5c-a2b8-462926daa55a" in namespace "secrets-6820" to be "Succeeded or Failed"
Apr 4 18:36:55.298: INFO: Pod "pod-secrets-7234185a-32cb-4d5c-a2b8-462926daa55a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.866519ms
Apr 4 18:36:57.317: INFO: Pod "pod-secrets-7234185a-32cb-4d5c-a2b8-462926daa55a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054973928s
Apr 4 18:36:59.320: INFO: Pod "pod-secrets-7234185a-32cb-4d5c-a2b8-462926daa55a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058112499s
STEP: Saw pod success
Apr 4 18:36:59.320: INFO: Pod "pod-secrets-7234185a-32cb-4d5c-a2b8-462926daa55a" satisfied condition "Succeeded or Failed"
Apr 4 18:36:59.323: INFO: Trying to get logs from node latest-worker pod pod-secrets-7234185a-32cb-4d5c-a2b8-462926daa55a container secret-volume-test:
STEP: delete the pod
Apr 4 18:36:59.364: INFO: Waiting for pod pod-secrets-7234185a-32cb-4d5c-a2b8-462926daa55a to disappear
Apr 4 18:36:59.388: INFO: Pod pod-secrets-7234185a-32cb-4d5c-a2b8-462926daa55a no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:36:59.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6820" for this suite.
STEP: Destroying namespace "secret-namespace-7740" for this suite.
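The Secrets spec above passes because Secret names are only unique per namespace: a pod volume resolves its Secret within the pod's own namespace, so a same-named Secret elsewhere (here, in secret-namespace-7740) cannot shadow it. A toy model of that scoping (plain dict keyed by namespace and name; the secret name and payloads are invented for the example, the real store is the API server):

```python
# Toy model of namespace-scoped Secret lookup; names/values are hypothetical.
secrets = {
    ("secrets-6820", "secret-test"): b"data-for-the-pod",
    ("secret-namespace-7740", "secret-test"): b"unrelated-data",
}

def resolve(pod_namespace, secret_name):
    # A pod's volume can only reference Secrets in its own namespace,
    # so the lookup key always includes the pod's namespace.
    return secrets[(pod_namespace, secret_name)]

print(resolve("secrets-6820", "secret-test"))  # → b'data-for-the-pod'
```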
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":281,"completed":221,"skipped":3725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:36:59.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:75
[It] deployment should support rollover [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:36:59.555: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 4 18:37:04.558: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 4 18:37:04.558: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 4 18:37:06.562: INFO: Creating deployment "test-rollover-deployment"
Apr 4 18:37:06.600: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Apr 4 18:37:08.606: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Apr 4 18:37:08.613: INFO: Ensure that both replica sets have 1 created replica
Apr 4 18:37:08.619: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Apr 4 18:37:08.626: INFO: Updating deployment test-rollover-deployment
Apr 4 18:37:08.626: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Apr 4 18:37:10.638: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Apr 4 18:37:10.644: INFO: Make sure deployment "test-rollover-deployment" is complete
Apr 4 18:37:10.649: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 18:37:10.649: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622228, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:37:12.658: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 18:37:12.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622231, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:37:14.655: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 18:37:14.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622231, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:37:16.656: INFO: all replica sets need to contain the pod-template-hash label
Apr 4 18:37:16.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasAvailable",
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622231, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 18:37:18.656: INFO: all replica sets need to contain the pod-template-hash label Apr 4 18:37:18.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622231, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 18:37:20.656: INFO: all replica sets need to contain the pod-template-hash label Apr 4 18:37:20.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622231, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622226, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-78df7bc796\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 18:37:22.746: INFO: Apr 4 18:37:22.746: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 Apr 4 18:37:22.755: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7367 /apis/apps/v1/namespaces/deployment-7367/deployments/test-rollover-deployment 9f54e5ce-0a9c-4ded-a8dc-a53f697710c6 5412128 2 2020-04-04 18:37:06 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002400c38 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-04 18:37:06 +0000 UTC,LastTransitionTime:2020-04-04 18:37:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-78df7bc796" has successfully progressed.,LastUpdateTime:2020-04-04 18:37:21 +0000 UTC,LastTransitionTime:2020-04-04 18:37:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 4 18:37:22.759: INFO: New ReplicaSet "test-rollover-deployment-78df7bc796" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-78df7bc796 deployment-7367 /apis/apps/v1/namespaces/deployment-7367/replicasets/test-rollover-deployment-78df7bc796 d5b01645-4d80-47d9-ba8c-d36a8b67a8ff 5412117 2 2020-04-04 18:37:08 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 9f54e5ce-0a9c-4ded-a8dc-a53f697710c6 0xc0024012d7 0xc0024012d8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78df7bc796,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:rollover-pod pod-template-hash:78df7bc796] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002401348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:37:22.759: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 4 18:37:22.759: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7367 /apis/apps/v1/namespaces/deployment-7367/replicasets/test-rollover-controller 895f39fe-0418-4f92-92b4-5680df9aa624 5412126 2 2020-04-04 18:36:59 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 9f54e5ce-0a9c-4ded-a8dc-a53f697710c6 0xc0024011ef 0xc002401200}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002401268 ClusterFirst map[] false false false 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:37:22.759: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-7367 /apis/apps/v1/namespaces/deployment-7367/replicasets/test-rollover-deployment-f6c94f66c 3557a425-edbc-4510-832d-037e92799569 5412065 2 2020-04-04 18:37:06 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 9f54e5ce-0a9c-4ded-a8dc-a53f697710c6 0xc0024013b0 0xc0024013b1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002401428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:37:22.762: INFO: Pod "test-rollover-deployment-78df7bc796-tjrlq" is available: &Pod{ObjectMeta:{test-rollover-deployment-78df7bc796-tjrlq test-rollover-deployment-78df7bc796- deployment-7367 /api/v1/namespaces/deployment-7367/pods/test-rollover-deployment-78df7bc796-tjrlq 4be62dd2-5305-43b4-a7f0-1a39df426c21 5412079 0 2020-04-04 18:37:08 +0000 UTC map[name:rollover-pod pod-template-hash:78df7bc796] map[] [{apps/v1 ReplicaSet test-rollover-deployment-78df7bc796 d5b01645-4d80-47d9-ba8c-d36a8b67a8ff 0xc000612e17 0xc000612e18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kxzpx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kxzpx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kxzpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Read
OnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:37:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:37:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:37:11 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:37:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.10,StartTime:2020-04-04 18:37:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:37:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://2eeb0dc237cb0ce09e083045697393796fa34461cc1d1b2ae15b6ee8558e3b28,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:37:22.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7367" for this suite. 
• [SLOW TEST:23.300 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":281,"completed":222,"skipped":3760,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:37:22.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 18:37:23.907: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 18:37:25.971: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622243, loc:(*time.Location)(0x7bcb460)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622243, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622243, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622243, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 4 18:37:27.975: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622243, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622243, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622243, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622243, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 18:37:31.012: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a validating webhook configuration STEP: Creating a 
configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:37:31.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1670" for this suite. STEP: Destroying namespace "webhook-1670-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.176 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":281,"completed":223,"skipped":3782,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Apr 4 18:37:31.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 18:37:32.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7941fd37-bf80-4835-bd17-33d8c4c14444" in namespace "projected-3373" to be "Succeeded or Failed" Apr 4 18:37:32.497: INFO: Pod "downwardapi-volume-7941fd37-bf80-4835-bd17-33d8c4c14444": Phase="Pending", Reason="", readiness=false. Elapsed: 15.873326ms Apr 4 18:37:34.500: INFO: Pod "downwardapi-volume-7941fd37-bf80-4835-bd17-33d8c4c14444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018822427s Apr 4 18:37:36.534: INFO: Pod "downwardapi-volume-7941fd37-bf80-4835-bd17-33d8c4c14444": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.052548951s STEP: Saw pod success Apr 4 18:37:36.534: INFO: Pod "downwardapi-volume-7941fd37-bf80-4835-bd17-33d8c4c14444" satisfied condition "Succeeded or Failed" Apr 4 18:37:36.649: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-7941fd37-bf80-4835-bd17-33d8c4c14444 container client-container: STEP: delete the pod Apr 4 18:37:36.864: INFO: Waiting for pod downwardapi-volume-7941fd37-bf80-4835-bd17-33d8c4c14444 to disappear Apr 4 18:37:36.886: INFO: Pod downwardapi-volume-7941fd37-bf80-4835-bd17-33d8c4c14444 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:37:36.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3373" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":281,"completed":224,"skipped":3799,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:37:36.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:37:50.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3147" for this suite. • [SLOW TEST:13.490 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":281,"completed":225,"skipped":3815,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:37:50.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 18:37:50.982: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 18:37:52.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622270, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622270, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622271, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721622270, loc:(*time.Location)(0x7bcb460)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 18:37:56.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:37:56.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-411" for this suite. STEP: Destroying namespace "webhook-411-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.725 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":281,"completed":226,"skipped":3816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:37:56.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:39:56.241: INFO: Deleting pod "var-expansion-dbb74528-12bd-4eb1-97db-5b9ab4cff5b0" in namespace "var-expansion-9090"
Apr 4 18:39:56.246: INFO: Wait up to 5m0s for pod "var-expansion-dbb74528-12bd-4eb1-97db-5b9ab4cff5b0" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:39:58.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9090" for this suite.
• [SLOW TEST:122.128 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":281,"completed":227,"skipped":3863,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:39:58.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 4 18:39:58.344: INFO: Waiting up to 5m0s for pod "downwardapi-volume-532e0bbe-deb2-4ae1-ac3c-6d7af9008e64" in namespace "projected-2442" to be "Succeeded or Failed"
Apr 4 18:39:58.356: INFO: Pod "downwardapi-volume-532e0bbe-deb2-4ae1-ac3c-6d7af9008e64": Phase="Pending", Reason="", readiness=false. Elapsed: 11.758875ms
Apr 4 18:40:00.537: INFO: Pod "downwardapi-volume-532e0bbe-deb2-4ae1-ac3c-6d7af9008e64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193211036s
Apr 4 18:40:02.542: INFO: Pod "downwardapi-volume-532e0bbe-deb2-4ae1-ac3c-6d7af9008e64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.197752076s
STEP: Saw pod success
Apr 4 18:40:02.542: INFO: Pod "downwardapi-volume-532e0bbe-deb2-4ae1-ac3c-6d7af9008e64" satisfied condition "Succeeded or Failed"
Apr 4 18:40:02.546: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-532e0bbe-deb2-4ae1-ac3c-6d7af9008e64 container client-container:
STEP: delete the pod
Apr 4 18:40:02.591: INFO: Waiting for pod downwardapi-volume-532e0bbe-deb2-4ae1-ac3c-6d7af9008e64 to disappear
Apr 4 18:40:02.613: INFO: Pod downwardapi-volume-532e0bbe-deb2-4ae1-ac3c-6d7af9008e64 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:02.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2442" for this suite.
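The "DefaultMode on files" spec above checks that files in the projected downward API volume get the mode set by `DefaultMode`, which the Kubernetes API stores as a plain decimal integer (the pod dumps later in this log show `DefaultMode:*420`). As an illustrative sketch (not the test's actual code), the decimal/octal relationship the test relies on:

```python
# DefaultMode is serialized as a decimal int in the API; 420 decimal
# is 0644 octal (rw-r--r--), the default mode checked on the files.
def mode_to_octal_string(decimal_mode: int) -> str:
    """Render a Kubernetes DefaultMode value as a Unix permission string."""
    return format(decimal_mode, "04o")

def octal_to_decimal(octal_str: str) -> int:
    """Inverse: the integer to put in a manifest's defaultMode for e.g. '0644'."""
    return int(octal_str, 8)

print(mode_to_octal_string(420))   # "0644"
print(octal_to_decimal("0644"))    # 420
```

This is why manifests that write `defaultMode: 420` and `defaultMode: 0644` mean the same thing.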
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":228,"skipped":3864,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:02.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Apr 4 18:40:07.325: INFO: Successfully updated pod "annotationupdate75957323-4c46-440d-b4ec-5877879bd2d6"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:09.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7173" for this suite.
• [SLOW TEST:6.827 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":281,"completed":229,"skipped":3872,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:09.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating the pod
Apr 4 18:40:14.112: INFO: Successfully updated pod "labelsupdate2633c65d-dfd0-429e-b9db-a48b71055008"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:16.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5802" for this suite.
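The repeated `Waiting up to 5m0s for pod "…" … Elapsed: …` lines throughout this log come from the framework polling a condition until it holds or a deadline passes. A rough Python sketch of that pattern (interval and timeout values here are illustrative, not the framework's exact ones):

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0):
    """Poll check() until it returns True or timeout_s elapses.

    Mirrors the log pattern: each attempt could report its elapsed time,
    and the caller fails the test if the deadline passes first.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            return elapsed
        if elapsed >= timeout_s:
            raise TimeoutError(f"condition not met after {elapsed:.3f}s")
        time.sleep(interval_s)

# Toy usage: a "pod" whose phase becomes Succeeded on the third poll.
phases = iter(["Pending", "Pending", "Succeeded"])
elapsed = wait_for_condition(lambda: next(phases) == "Succeeded",
                             timeout_s=10.0, interval_s=0.01)
print(f"succeeded after {elapsed:.3f}s")
```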
• [SLOW TEST:6.699 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":281,"completed":230,"skipped":3875,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:16.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-10f31f08-a7fb-404f-8488-e9812e2187c0
STEP: Creating a pod to test consume secrets
Apr 4 18:40:16.266: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-30bcfaa9-b822-4344-8616-bd14f7bbdb46" in namespace "projected-6980" to be "Succeeded or Failed"
Apr 4 18:40:16.274: INFO: Pod "pod-projected-secrets-30bcfaa9-b822-4344-8616-bd14f7bbdb46": Phase="Pending", Reason="", readiness=false. Elapsed: 8.609057ms
Apr 4 18:40:18.279: INFO: Pod "pod-projected-secrets-30bcfaa9-b822-4344-8616-bd14f7bbdb46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012795239s
Apr 4 18:40:20.282: INFO: Pod "pod-projected-secrets-30bcfaa9-b822-4344-8616-bd14f7bbdb46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016233626s
STEP: Saw pod success
Apr 4 18:40:20.282: INFO: Pod "pod-projected-secrets-30bcfaa9-b822-4344-8616-bd14f7bbdb46" satisfied condition "Succeeded or Failed"
Apr 4 18:40:20.285: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-30bcfaa9-b822-4344-8616-bd14f7bbdb46 container secret-volume-test:
STEP: delete the pod
Apr 4 18:40:20.317: INFO: Waiting for pod pod-projected-secrets-30bcfaa9-b822-4344-8616-bd14f7bbdb46 to disappear
Apr 4 18:40:20.333: INFO: Pod pod-projected-secrets-30bcfaa9-b822-4344-8616-bd14f7bbdb46 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:20.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6980" for this suite.
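Each `{"msg":"PASSED …","total":281,"completed":…,"skipped":…,"failed":…}` record between specs is machine-readable suite progress. An illustrative sketch (not part of the e2e tooling) of pulling those records out of a saved log; the sample line is copied from this run:

```python
import json
import re

# Matches the embedded JSON progress records, which always end with a
# "failed" count.
PROGRESS_RE = re.compile(r'\{"msg":.*?"failed":\d+\}')

def parse_progress(log_text: str):
    """Extract the JSON progress records embedded in an e2e log."""
    return [json.loads(m.group(0)) for m in PROGRESS_RE.finditer(log_text)]

sample = ('{"msg":"PASSED [sig-storage] Projected secret should be consumable '
          'in multiple volumes in a pod [NodeConformance] [Conformance]",'
          '"total":281,"completed":231,"skipped":3897,"failed":0}')
records = parse_progress(sample)
print(records[0]["completed"], "of", records[0]["total"])  # 231 of 281
```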
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":281,"completed":231,"skipped":3897,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:20.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 4 18:40:20.407: INFO: Waiting up to 5m0s for pod "downward-api-fdc59545-9d6d-4588-ac34-8d278daa31b6" in namespace "downward-api-5730" to be "Succeeded or Failed"
Apr 4 18:40:20.456: INFO: Pod "downward-api-fdc59545-9d6d-4588-ac34-8d278daa31b6": Phase="Pending", Reason="", readiness=false. Elapsed: 48.717973ms
Apr 4 18:40:22.459: INFO: Pod "downward-api-fdc59545-9d6d-4588-ac34-8d278daa31b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052158902s
Apr 4 18:40:24.463: INFO: Pod "downward-api-fdc59545-9d6d-4588-ac34-8d278daa31b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055842598s
STEP: Saw pod success
Apr 4 18:40:24.463: INFO: Pod "downward-api-fdc59545-9d6d-4588-ac34-8d278daa31b6" satisfied condition "Succeeded or Failed"
Apr 4 18:40:24.466: INFO: Trying to get logs from node latest-worker2 pod downward-api-fdc59545-9d6d-4588-ac34-8d278daa31b6 container dapi-container:
STEP: delete the pod
Apr 4 18:40:24.484: INFO: Waiting for pod downward-api-fdc59545-9d6d-4588-ac34-8d278daa31b6 to disappear
Apr 4 18:40:24.488: INFO: Pod downward-api-fdc59545-9d6d-4588-ac34-8d278daa31b6 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:24.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5730" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":281,"completed":232,"skipped":3908,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:24.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Starting the proxy
Apr 4 18:40:24.562: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix888932503/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:24.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8942" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":281,"completed":233,"skipped":3923,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:24.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Apr 4 18:40:28.852: INFO: &Pod{ObjectMeta:{send-events-f3a33174-1b3e-4e6c-913d-28d1eb5c68a5 events-3284 /api/v1/namespaces/events-3284/pods/send-events-f3a33174-1b3e-4e6c-913d-28d1eb5c68a5 841f2e99-1e56-4450-8e83-8674fc1287ad 5413050 0 2020-04-04 18:40:24 +0000 UTC map[name:foo time:819528327] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkl2s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkl2s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkl2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Contai
ner{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:40:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:40:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:40:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:40:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.16,StartTime:2020-04-04 18:40:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:40:27 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://ad85545d63525dc89c00f167840ada86a2a515d7837534ef4388952a72b045fa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Apr 4 18:40:30.857: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Apr 4 18:40:32.862: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:32.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3284" for this suite.
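The pod dump above carries the default tolerations (`node.kubernetes.io/not-ready` and `node.kubernetes.io/unreachable`, both `NoExecute`), while the DaemonSet spec in this log repeatedly reports that its pods "can't tolerate" the control-plane's `node-role.kubernetes.io/master:NoSchedule` taint. A simplified sketch of that matching rule (the real scheduler/kubelet logic has more cases, e.g. `tolerationSeconds` handling):

```python
def tolerates(toleration: dict, taint: dict) -> bool:
    """Simplified: does one toleration match one taint?"""
    # An empty effect on the toleration matches any taint effect.
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator", "Equal") == "Exists":
        # Empty key with Exists tolerates every taint.
        return toleration.get("key") in (None, "", taint["key"])
    return (toleration.get("key") == taint["key"]
            and toleration.get("value", "") == taint.get("value", ""))

master_taint = {"key": "node-role.kubernetes.io/master", "value": "",
                "effect": "NoSchedule"}
default_tolerations = [
    {"key": "node.kubernetes.io/not-ready", "operator": "Exists",
     "effect": "NoExecute"},
    {"key": "node.kubernetes.io/unreachable", "operator": "Exists",
     "effect": "NoExecute"},
]
# Neither default toleration matches the master taint, hence the
# "can't tolerate node latest-control-plane" log lines.
print(any(tolerates(t, master_taint) for t in default_tolerations))  # False
```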
• [SLOW TEST:8.239 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":281,"completed":234,"skipped":3931,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:32.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 4 18:40:33.005: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:33.009: INFO: Number of nodes with available pods: 0
Apr 4 18:40:33.009: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:34.014: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:34.017: INFO: Number of nodes with available pods: 0
Apr 4 18:40:34.017: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:35.036: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:35.040: INFO: Number of nodes with available pods: 0
Apr 4 18:40:35.040: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:36.013: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:36.016: INFO: Number of nodes with available pods: 0
Apr 4 18:40:36.016: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:37.015: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:37.019: INFO: Number of nodes with available pods: 2
Apr 4 18:40:37.019: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 4 18:40:37.035: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:37.038: INFO: Number of nodes with available pods: 1
Apr 4 18:40:37.038: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:38.042: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:38.044: INFO: Number of nodes with available pods: 1
Apr 4 18:40:38.044: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:39.042: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:39.045: INFO: Number of nodes with available pods: 1
Apr 4 18:40:39.045: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:40.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:40.047: INFO: Number of nodes with available pods: 1
Apr 4 18:40:40.047: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:41.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:41.046: INFO: Number of nodes with available pods: 1
Apr 4 18:40:41.047: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:42.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:42.046: INFO: Number of nodes with available pods: 1
Apr 4 18:40:42.046: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:43.044: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:43.048: INFO: Number of nodes with available pods: 1
Apr 4 18:40:43.048: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:40:44.043: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:40:44.047: INFO: Number of nodes with available pods: 2
Apr 4 18:40:44.047: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-460, will wait for the garbage collector to delete the pods
Apr 4 18:40:44.109: INFO: Deleting DaemonSet.extensions daemon-set took: 6.175962ms
Apr 4 18:40:44.409: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275759ms
Apr 4 18:40:53.113: INFO: Number of nodes with available pods: 0
Apr 4 18:40:53.113: INFO: Number of running nodes: 0, number of available pods: 0
Apr 4 18:40:53.116: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-460/daemonsets","resourceVersion":"5413212"},"items":null}
Apr 4 18:40:53.118: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-460/pods","resourceVersion":"5413212"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:53.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-460" for this suite.
• [SLOW TEST:20.234 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":281,"completed":235,"skipped":3949,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:53.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:52
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 4 18:40:53.223: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 4 18:40:58.232: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:40:58.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5311" for this suite.
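The ReplicationController spec above releases a pod once its labels stop matching the controller's selector. A minimal sketch of equality-based selector matching (the selector and labels here are illustrative, modeled on the test's `name: pod-release` pod):

```python
def selector_matches(selector: dict, labels: dict) -> bool:
    """Equality-based selector: every selector key/value must appear in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}
pod_labels = {"name": "pod-release"}
print(selector_matches(selector, pod_labels))   # True
# Changing the matched label "releases" the pod from the controller,
# which then creates a replacement to get back to the desired count.
pod_labels["name"] = "not-matching"
print(selector_matches(selector, pod_labels))   # False
```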
• [SLOW TEST:5.201 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":281,"completed":236,"skipped":3974,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:40:58.339: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Apr 4 18:40:58.426: INFO: Waiting up to 5m0s for pod "downward-api-c8e95834-3946-4a08-bdaf-17dee9369f97" in namespace "downward-api-2268" to be "Succeeded or Failed"
Apr 4 18:40:58.430: INFO: Pod "downward-api-c8e95834-3946-4a08-bdaf-17dee9369f97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.855213ms
Apr 4 18:41:00.434: INFO: Pod "downward-api-c8e95834-3946-4a08-bdaf-17dee9369f97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0080029s
Apr 4 18:41:02.439: INFO: Pod "downward-api-c8e95834-3946-4a08-bdaf-17dee9369f97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012804698s
STEP: Saw pod success
Apr 4 18:41:02.439: INFO: Pod "downward-api-c8e95834-3946-4a08-bdaf-17dee9369f97" satisfied condition "Succeeded or Failed"
Apr 4 18:41:02.442: INFO: Trying to get logs from node latest-worker pod downward-api-c8e95834-3946-4a08-bdaf-17dee9369f97 container dapi-container:
STEP: delete the pod
Apr 4 18:41:02.474: INFO: Waiting for pod downward-api-c8e95834-3946-4a08-bdaf-17dee9369f97 to disappear
Apr 4 18:41:02.490: INFO: Pod downward-api-c8e95834-3946-4a08-bdaf-17dee9369f97 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:41:02.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2268" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":281,"completed":237,"skipped":3982,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:41:02.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1528.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1528.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1528.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1528.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1528.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1528.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 4 18:41:08.740: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:08.744: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:08.746: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:08.749: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:08.758: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:08.761: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod 
dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:08.763: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:08.766: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:08.782: INFO: Lookups using dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local]
Apr 4 18:41:13.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:13.791: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:13.794: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:13.798: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:13.807: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:13.811: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:13.814: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:13.838: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:13.847: INFO: Lookups using dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local]
Apr 4 18:41:18.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:18.790: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:18.793: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:18.796: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:18.803: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:18.806: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:18.808: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:18.810: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:18.815: INFO: Lookups using dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local]
Apr 4 18:41:23.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:23.791: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:23.794: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:23.797: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod
dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:23.806: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:23.809: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:23.813: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:23.815: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:23.820: INFO: Lookups using dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local]
Apr 4 18:41:28.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:28.791: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:28.794: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:28.798: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:28.809: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:28.812: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:28.815: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:28.818: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:28.825: INFO: Lookups using dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local]
Apr 4 18:41:33.786: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:33.790: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:33.793: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:33.796: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:33.805: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:33.808: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:33.811: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:33.813: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local from pod dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432: the server could not find the requested resource (get pods dns-test-d7525525-4aab-4b1d-b780-a05824cb1432)
Apr 4 18:41:33.819: INFO: Lookups using dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1528.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1528.svc.cluster.local jessie_udp@dns-test-service-2.dns-1528.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1528.svc.cluster.local]
Apr 4 18:41:38.821: INFO: DNS probes using dns-1528/dns-test-d7525525-4aab-4b1d-b780-a05824cb1432 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:41:39.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying
namespace "dns-1528" for this suite.
• [SLOW TEST:36.880 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":281,"completed":238,"skipped":4017,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:41:39.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name s-test-opt-del-89498980-9ba8-4cd7-9493-fe2f4d10a85a
STEP: Creating secret with name s-test-opt-upd-a92b54dd-fadf-49d8-9c6b-7945d56339aa
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-89498980-9ba8-4cd7-9493-fe2f4d10a85a
STEP: Updating secret s-test-opt-upd-a92b54dd-fadf-49d8-9c6b-7945d56339aa
STEP: Creating secret with name s-test-opt-create-5f84180a-8e3e-4411-9c77-fa38d0bc1194
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:42:53.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2407" for this suite.
• [SLOW TEST:74.587 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":281,"completed":239,"skipped":4060,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:42:53.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:43:05.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1197" for this suite.
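As an aside on the DNS probe commands logged earlier: the pod A-record name they query is derived from the pod IP by an `awk` substitution on `hostname -i`. A minimal local sketch of that derivation (the IP below is illustrative, not taken from this run):

```shell
# Build the pod A-record name the wheezy/jessie probes query,
# mirroring the `hostname -i | awk` pipeline from the test commands.
pod_ip="10.244.1.3"   # illustrative; the real probe uses `hostname -i`
echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1528.pod.cluster.local"}'
# prints 10-244-1-3.dns-1528.pod.cluster.local
```

Each dot in the IP becomes a dash, yielding the `<ip-with-dashes>.<namespace>.pod.cluster.local` name that the probes resolve over both UDP and TCP.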
• [SLOW TEST:11.217 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":281,"completed":240,"skipped":4063,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:43:05.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:43:05.256: INFO: Waiting up to 5m0s for pod "busybox-user-65534-a1a0d73c-cd6f-48fe-a23d-04dc58595b38" in namespace "security-context-test-5574" to be "Succeeded or Failed"
Apr 4 18:43:05.259: INFO: Pod "busybox-user-65534-a1a0d73c-cd6f-48fe-a23d-04dc58595b38": Phase="Pending", Reason="", readiness=false. Elapsed: 3.161205ms
Apr 4 18:43:07.390: INFO: Pod "busybox-user-65534-a1a0d73c-cd6f-48fe-a23d-04dc58595b38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1344514s
Apr 4 18:43:09.394: INFO: Pod "busybox-user-65534-a1a0d73c-cd6f-48fe-a23d-04dc58595b38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138142213s
Apr 4 18:43:09.394: INFO: Pod "busybox-user-65534-a1a0d73c-cd6f-48fe-a23d-04dc58595b38" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:43:09.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5574" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":241,"skipped":4084,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:43:09.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 4 18:43:09.627: INFO: Waiting up to 5m0s for pod "pod-434f5647-0c92-4830-b9f3-a09bae652e00" in namespace "emptydir-5242" to be "Succeeded or Failed"
Apr 4 18:43:09.719: INFO: Pod "pod-434f5647-0c92-4830-b9f3-a09bae652e00": Phase="Pending", Reason="", readiness=false. Elapsed: 92.102064ms
Apr 4 18:43:11.749: INFO: Pod "pod-434f5647-0c92-4830-b9f3-a09bae652e00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121964113s
Apr 4 18:43:13.753: INFO: Pod "pod-434f5647-0c92-4830-b9f3-a09bae652e00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.125760185s
STEP: Saw pod success
Apr 4 18:43:13.753: INFO: Pod "pod-434f5647-0c92-4830-b9f3-a09bae652e00" satisfied condition "Succeeded or Failed"
Apr 4 18:43:13.757: INFO: Trying to get logs from node latest-worker pod pod-434f5647-0c92-4830-b9f3-a09bae652e00 container test-container:
STEP: delete the pod
Apr 4 18:43:13.793: INFO: Waiting for pod pod-434f5647-0c92-4830-b9f3-a09bae652e00 to disappear
Apr 4 18:43:13.857: INFO: Pod pod-434f5647-0c92-4830-b9f3-a09bae652e00 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:43:13.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5242" for this suite.
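The emptydir (root,0777,tmpfs) test above verifies that the volume mounted in the pod carries mode 0777. A minimal local sketch of that permission check (the real test mounts a tmpfs-backed emptyDir inside the pod; here a scratch directory stands in for the mount):

```shell
# Imitate the permission property the test asserts: a directory
# given mode 0777 should report 777 from stat.
dir=$(mktemp -d)        # stand-in for the emptyDir mount point
chmod 0777 "$dir"
stat -c '%a' "$dir"     # prints 777
rmdir "$dir"
```

In the actual conformance test the container image prints the mount's permissions and contents, and the framework compares that output against the expected mode.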
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":242,"skipped":4086,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:43:13.863: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 4 18:43:18.979: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:43:19.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8596" for this suite.
• [SLOW TEST:6.138 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":281,"completed":243,"skipped":4116,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:43:20.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 4 18:43:20.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-891b6ad9-357c-484d-a214-ad3cfd1df6a4" in namespace "downward-api-9007" to be "Succeeded or Failed"
Apr 4 18:43:20.108: INFO: Pod "downwardapi-volume-891b6ad9-357c-484d-a214-ad3cfd1df6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.930038ms
Apr 4 18:43:22.127: INFO: Pod "downwardapi-volume-891b6ad9-357c-484d-a214-ad3cfd1df6a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022278298s
Apr 4 18:43:24.131: INFO: Pod "downwardapi-volume-891b6ad9-357c-484d-a214-ad3cfd1df6a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026694822s
STEP: Saw pod success
Apr 4 18:43:24.131: INFO: Pod "downwardapi-volume-891b6ad9-357c-484d-a214-ad3cfd1df6a4" satisfied condition "Succeeded or Failed"
Apr 4 18:43:24.134: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-891b6ad9-357c-484d-a214-ad3cfd1df6a4 container client-container:
STEP: delete the pod
Apr 4 18:43:24.152: INFO: Waiting for pod downwardapi-volume-891b6ad9-357c-484d-a214-ad3cfd1df6a4 to disappear
Apr 4 18:43:24.204: INFO: Pod downwardapi-volume-891b6ad9-357c-484d-a214-ad3cfd1df6a4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:43:24.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9007" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":281,"completed":244,"skipped":4117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:43:24.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:43:24.280: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-22a2f163-b949-4365-9e2a-9b4305e41faa" in namespace "security-context-test-2642" to be "Succeeded or Failed"
Apr 4 18:43:24.283: INFO: Pod "busybox-privileged-false-22a2f163-b949-4365-9e2a-9b4305e41faa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.373974ms
Apr 4 18:43:26.288: INFO: Pod "busybox-privileged-false-22a2f163-b949-4365-9e2a-9b4305e41faa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008102399s
Apr 4 18:43:28.342: INFO: Pod "busybox-privileged-false-22a2f163-b949-4365-9e2a-9b4305e41faa": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.062297476s Apr 4 18:43:28.342: INFO: Pod "busybox-privileged-false-22a2f163-b949-4365-9e2a-9b4305e41faa" satisfied condition "Succeeded or Failed" Apr 4 18:43:28.350: INFO: Got logs for pod "busybox-privileged-false-22a2f163-b949-4365-9e2a-9b4305e41faa": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:43:28.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2642" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":245,"skipped":4152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:43:28.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-9073 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy 
[Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9073 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9073 Apr 4 18:43:28.781: INFO: Found 0 stateful pods, waiting for 1 Apr 4 18:43:38.787: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 4 18:43:38.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 18:43:41.604: INFO: stderr: "I0404 18:43:41.477098 3555 log.go:172] (0xc0005bbef0) (0xc0004f2b40) Create stream\nI0404 18:43:41.477278 3555 log.go:172] (0xc0005bbef0) (0xc0004f2b40) Stream added, broadcasting: 1\nI0404 18:43:41.479075 3555 log.go:172] (0xc0005bbef0) Reply frame received for 1\nI0404 18:43:41.479116 3555 log.go:172] (0xc0005bbef0) (0xc0005da000) Create stream\nI0404 18:43:41.479125 3555 log.go:172] (0xc0005bbef0) (0xc0005da000) Stream added, broadcasting: 3\nI0404 18:43:41.479945 3555 log.go:172] (0xc0005bbef0) Reply frame received for 3\nI0404 18:43:41.480002 3555 log.go:172] (0xc0005bbef0) (0xc000644000) Create stream\nI0404 18:43:41.480031 3555 log.go:172] (0xc0005bbef0) (0xc000644000) Stream added, broadcasting: 5\nI0404 18:43:41.480779 3555 log.go:172] (0xc0005bbef0) Reply frame received for 5\nI0404 18:43:41.565458 3555 log.go:172] (0xc0005bbef0) Data frame received for 5\nI0404 18:43:41.565486 3555 log.go:172] (0xc000644000) (5) Data frame handling\nI0404 18:43:41.565503 3555 log.go:172] (0xc000644000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 18:43:41.596081 3555 log.go:172] (0xc0005bbef0) 
Data frame received for 3\nI0404 18:43:41.596095 3555 log.go:172] (0xc0005da000) (3) Data frame handling\nI0404 18:43:41.596112 3555 log.go:172] (0xc0005da000) (3) Data frame sent\nI0404 18:43:41.596120 3555 log.go:172] (0xc0005bbef0) Data frame received for 3\nI0404 18:43:41.596127 3555 log.go:172] (0xc0005da000) (3) Data frame handling\nI0404 18:43:41.596200 3555 log.go:172] (0xc0005bbef0) Data frame received for 5\nI0404 18:43:41.596247 3555 log.go:172] (0xc000644000) (5) Data frame handling\nI0404 18:43:41.598276 3555 log.go:172] (0xc0005bbef0) Data frame received for 1\nI0404 18:43:41.598315 3555 log.go:172] (0xc0004f2b40) (1) Data frame handling\nI0404 18:43:41.598349 3555 log.go:172] (0xc0004f2b40) (1) Data frame sent\nI0404 18:43:41.598384 3555 log.go:172] (0xc0005bbef0) (0xc0004f2b40) Stream removed, broadcasting: 1\nI0404 18:43:41.598425 3555 log.go:172] (0xc0005bbef0) Go away received\nI0404 18:43:41.598792 3555 log.go:172] (0xc0005bbef0) (0xc0004f2b40) Stream removed, broadcasting: 1\nI0404 18:43:41.598811 3555 log.go:172] (0xc0005bbef0) (0xc0005da000) Stream removed, broadcasting: 3\nI0404 18:43:41.598836 3555 log.go:172] (0xc0005bbef0) (0xc000644000) Stream removed, broadcasting: 5\n" Apr 4 18:43:41.604: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 18:43:41.604: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 18:43:41.608: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 4 18:43:51.613: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 18:43:51.613: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 18:43:51.667: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999561s Apr 4 18:43:52.690: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.957538743s Apr 4 18:43:53.695: INFO: 
Verifying statefulset ss doesn't scale past 1 for another 7.934994834s Apr 4 18:43:54.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.930043771s Apr 4 18:43:55.703: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.925701236s Apr 4 18:43:56.708: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.921176477s Apr 4 18:43:57.712: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.916870324s Apr 4 18:43:58.716: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.913057788s Apr 4 18:43:59.720: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.908824029s Apr 4 18:44:00.725: INFO: Verifying statefulset ss doesn't scale past 1 for another 904.042777ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9073 Apr 4 18:44:01.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:44:01.941: INFO: stderr: "I0404 18:44:01.852438 3588 log.go:172] (0xc00067c9a0) (0xc0008d6280) Create stream\nI0404 18:44:01.852498 3588 log.go:172] (0xc00067c9a0) (0xc0008d6280) Stream added, broadcasting: 1\nI0404 18:44:01.855247 3588 log.go:172] (0xc00067c9a0) Reply frame received for 1\nI0404 18:44:01.855295 3588 log.go:172] (0xc00067c9a0) (0xc0003d4a00) Create stream\nI0404 18:44:01.855316 3588 log.go:172] (0xc00067c9a0) (0xc0003d4a00) Stream added, broadcasting: 3\nI0404 18:44:01.856134 3588 log.go:172] (0xc00067c9a0) Reply frame received for 3\nI0404 18:44:01.856173 3588 log.go:172] (0xc00067c9a0) (0xc000677040) Create stream\nI0404 18:44:01.856193 3588 log.go:172] (0xc00067c9a0) (0xc000677040) Stream added, broadcasting: 5\nI0404 18:44:01.856972 3588 log.go:172] (0xc00067c9a0) Reply frame received for 5\nI0404 18:44:01.929744 3588 log.go:172] (0xc00067c9a0) 
Data frame received for 3\nI0404 18:44:01.929777 3588 log.go:172] (0xc0003d4a00) (3) Data frame handling\nI0404 18:44:01.929808 3588 log.go:172] (0xc0003d4a00) (3) Data frame sent\nI0404 18:44:01.929825 3588 log.go:172] (0xc00067c9a0) Data frame received for 3\nI0404 18:44:01.929841 3588 log.go:172] (0xc0003d4a00) (3) Data frame handling\nI0404 18:44:01.930047 3588 log.go:172] (0xc00067c9a0) Data frame received for 5\nI0404 18:44:01.930082 3588 log.go:172] (0xc000677040) (5) Data frame handling\nI0404 18:44:01.930106 3588 log.go:172] (0xc000677040) (5) Data frame sent\nI0404 18:44:01.930124 3588 log.go:172] (0xc00067c9a0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 18:44:01.930134 3588 log.go:172] (0xc000677040) (5) Data frame handling\nI0404 18:44:01.931983 3588 log.go:172] (0xc00067c9a0) Data frame received for 1\nI0404 18:44:01.932006 3588 log.go:172] (0xc0008d6280) (1) Data frame handling\nI0404 18:44:01.932017 3588 log.go:172] (0xc0008d6280) (1) Data frame sent\nI0404 18:44:01.932030 3588 log.go:172] (0xc00067c9a0) (0xc0008d6280) Stream removed, broadcasting: 1\nI0404 18:44:01.932047 3588 log.go:172] (0xc00067c9a0) Go away received\nI0404 18:44:01.932625 3588 log.go:172] (0xc00067c9a0) (0xc0008d6280) Stream removed, broadcasting: 1\nI0404 18:44:01.932666 3588 log.go:172] (0xc00067c9a0) (0xc0003d4a00) Stream removed, broadcasting: 3\nI0404 18:44:01.932690 3588 log.go:172] (0xc00067c9a0) (0xc000677040) Stream removed, broadcasting: 5\n" Apr 4 18:44:01.941: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 18:44:01.941: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 18:44:01.945: INFO: Found 1 stateful pods, waiting for 3 Apr 4 18:44:11.950: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 4 18:44:11.950: INFO: Waiting for pod ss-1 to enter 
Running - Ready=true, currently Running - Ready=true Apr 4 18:44:11.950: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 4 18:44:11.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 18:44:12.171: INFO: stderr: "I0404 18:44:12.076157 3609 log.go:172] (0xc00003a0b0) (0xc0003700a0) Create stream\nI0404 18:44:12.076217 3609 log.go:172] (0xc00003a0b0) (0xc0003700a0) Stream added, broadcasting: 1\nI0404 18:44:12.078026 3609 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0404 18:44:12.078055 3609 log.go:172] (0xc00003a0b0) (0xc000b7c000) Create stream\nI0404 18:44:12.078062 3609 log.go:172] (0xc00003a0b0) (0xc000b7c000) Stream added, broadcasting: 3\nI0404 18:44:12.078998 3609 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0404 18:44:12.079043 3609 log.go:172] (0xc00003a0b0) (0xc000bc6000) Create stream\nI0404 18:44:12.079059 3609 log.go:172] (0xc00003a0b0) (0xc000bc6000) Stream added, broadcasting: 5\nI0404 18:44:12.079884 3609 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0404 18:44:12.165859 3609 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0404 18:44:12.165885 3609 log.go:172] (0xc000b7c000) (3) Data frame handling\nI0404 18:44:12.165900 3609 log.go:172] (0xc000b7c000) (3) Data frame sent\nI0404 18:44:12.165951 3609 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0404 18:44:12.165973 3609 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0404 18:44:12.166011 3609 log.go:172] (0xc000b7c000) (3) Data frame handling\nI0404 18:44:12.166029 3609 log.go:172] (0xc000bc6000) (5) Data frame handling\nI0404 18:44:12.166040 3609 log.go:172] (0xc000bc6000) (5) Data frame sent\nI0404 
18:44:12.166047 3609 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0404 18:44:12.166054 3609 log.go:172] (0xc000bc6000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 18:44:12.167081 3609 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0404 18:44:12.167094 3609 log.go:172] (0xc0003700a0) (1) Data frame handling\nI0404 18:44:12.167101 3609 log.go:172] (0xc0003700a0) (1) Data frame sent\nI0404 18:44:12.167109 3609 log.go:172] (0xc00003a0b0) (0xc0003700a0) Stream removed, broadcasting: 1\nI0404 18:44:12.167119 3609 log.go:172] (0xc00003a0b0) Go away received\nI0404 18:44:12.167544 3609 log.go:172] (0xc00003a0b0) (0xc0003700a0) Stream removed, broadcasting: 1\nI0404 18:44:12.167570 3609 log.go:172] (0xc00003a0b0) (0xc000b7c000) Stream removed, broadcasting: 3\nI0404 18:44:12.167583 3609 log.go:172] (0xc00003a0b0) (0xc000bc6000) Stream removed, broadcasting: 5\n" Apr 4 18:44:12.171: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 18:44:12.171: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 18:44:12.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 18:44:12.425: INFO: stderr: "I0404 18:44:12.317442 3629 log.go:172] (0xc000948210) (0xc00092e0a0) Create stream\nI0404 18:44:12.317491 3629 log.go:172] (0xc000948210) (0xc00092e0a0) Stream added, broadcasting: 1\nI0404 18:44:12.319878 3629 log.go:172] (0xc000948210) Reply frame received for 1\nI0404 18:44:12.319931 3629 log.go:172] (0xc000948210) (0xc000670fa0) Create stream\nI0404 18:44:12.319946 3629 log.go:172] (0xc000948210) (0xc000670fa0) Stream added, broadcasting: 3\nI0404 18:44:12.320934 3629 log.go:172] (0xc000948210) Reply frame received for 3\nI0404 
18:44:12.320984 3629 log.go:172] (0xc000948210) (0xc00044a000) Create stream\nI0404 18:44:12.320997 3629 log.go:172] (0xc000948210) (0xc00044a000) Stream added, broadcasting: 5\nI0404 18:44:12.322099 3629 log.go:172] (0xc000948210) Reply frame received for 5\nI0404 18:44:12.380393 3629 log.go:172] (0xc000948210) Data frame received for 5\nI0404 18:44:12.380414 3629 log.go:172] (0xc00044a000) (5) Data frame handling\nI0404 18:44:12.380426 3629 log.go:172] (0xc00044a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 18:44:12.416987 3629 log.go:172] (0xc000948210) Data frame received for 5\nI0404 18:44:12.417019 3629 log.go:172] (0xc00044a000) (5) Data frame handling\nI0404 18:44:12.418058 3629 log.go:172] (0xc000948210) Data frame received for 3\nI0404 18:44:12.418092 3629 log.go:172] (0xc000670fa0) (3) Data frame handling\nI0404 18:44:12.418122 3629 log.go:172] (0xc000670fa0) (3) Data frame sent\nI0404 18:44:12.418631 3629 log.go:172] (0xc000948210) Data frame received for 3\nI0404 18:44:12.418649 3629 log.go:172] (0xc000670fa0) (3) Data frame handling\nI0404 18:44:12.421055 3629 log.go:172] (0xc000948210) Data frame received for 1\nI0404 18:44:12.421072 3629 log.go:172] (0xc00092e0a0) (1) Data frame handling\nI0404 18:44:12.421083 3629 log.go:172] (0xc00092e0a0) (1) Data frame sent\nI0404 18:44:12.421204 3629 log.go:172] (0xc000948210) (0xc00092e0a0) Stream removed, broadcasting: 1\nI0404 18:44:12.421230 3629 log.go:172] (0xc000948210) Go away received\nI0404 18:44:12.421494 3629 log.go:172] (0xc000948210) (0xc00092e0a0) Stream removed, broadcasting: 1\nI0404 18:44:12.421506 3629 log.go:172] (0xc000948210) (0xc000670fa0) Stream removed, broadcasting: 3\nI0404 18:44:12.421512 3629 log.go:172] (0xc000948210) (0xc00044a000) Stream removed, broadcasting: 5\n" Apr 4 18:44:12.425: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 18:44:12.425: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 18:44:12.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 4 18:44:12.670: INFO: stderr: "I0404 18:44:12.549811 3651 log.go:172] (0xc000bd6f20) (0xc000bbc3c0) Create stream\nI0404 18:44:12.549863 3651 log.go:172] (0xc000bd6f20) (0xc000bbc3c0) Stream added, broadcasting: 1\nI0404 18:44:12.553045 3651 log.go:172] (0xc000bd6f20) Reply frame received for 1\nI0404 18:44:12.553106 3651 log.go:172] (0xc000bd6f20) (0xc000ada280) Create stream\nI0404 18:44:12.553317 3651 log.go:172] (0xc000bd6f20) (0xc000ada280) Stream added, broadcasting: 3\nI0404 18:44:12.554404 3651 log.go:172] (0xc000bd6f20) Reply frame received for 3\nI0404 18:44:12.554441 3651 log.go:172] (0xc000bd6f20) (0xc000bbc460) Create stream\nI0404 18:44:12.554464 3651 log.go:172] (0xc000bd6f20) (0xc000bbc460) Stream added, broadcasting: 5\nI0404 18:44:12.555533 3651 log.go:172] (0xc000bd6f20) Reply frame received for 5\nI0404 18:44:12.618239 3651 log.go:172] (0xc000bd6f20) Data frame received for 5\nI0404 18:44:12.618269 3651 log.go:172] (0xc000bbc460) (5) Data frame handling\nI0404 18:44:12.618293 3651 log.go:172] (0xc000bbc460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0404 18:44:12.662171 3651 log.go:172] (0xc000bd6f20) Data frame received for 3\nI0404 18:44:12.662212 3651 log.go:172] (0xc000ada280) (3) Data frame handling\nI0404 18:44:12.662241 3651 log.go:172] (0xc000ada280) (3) Data frame sent\nI0404 18:44:12.662259 3651 log.go:172] (0xc000bd6f20) Data frame received for 3\nI0404 18:44:12.662276 3651 log.go:172] (0xc000ada280) (3) Data frame handling\nI0404 18:44:12.662453 3651 log.go:172] (0xc000bd6f20) Data frame received for 5\nI0404 18:44:12.662482 3651 log.go:172] (0xc000bbc460) (5) Data frame 
handling\nI0404 18:44:12.664425 3651 log.go:172] (0xc000bd6f20) Data frame received for 1\nI0404 18:44:12.664457 3651 log.go:172] (0xc000bbc3c0) (1) Data frame handling\nI0404 18:44:12.664481 3651 log.go:172] (0xc000bbc3c0) (1) Data frame sent\nI0404 18:44:12.664517 3651 log.go:172] (0xc000bd6f20) (0xc000bbc3c0) Stream removed, broadcasting: 1\nI0404 18:44:12.664586 3651 log.go:172] (0xc000bd6f20) Go away received\nI0404 18:44:12.664935 3651 log.go:172] (0xc000bd6f20) (0xc000bbc3c0) Stream removed, broadcasting: 1\nI0404 18:44:12.664957 3651 log.go:172] (0xc000bd6f20) (0xc000ada280) Stream removed, broadcasting: 3\nI0404 18:44:12.664967 3651 log.go:172] (0xc000bd6f20) (0xc000bbc460) Stream removed, broadcasting: 5\n" Apr 4 18:44:12.671: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 4 18:44:12.671: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 4 18:44:12.671: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 18:44:12.674: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 4 18:44:22.682: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 4 18:44:22.682: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 4 18:44:22.682: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 4 18:44:22.697: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999069s Apr 4 18:44:23.702: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992446147s Apr 4 18:44:24.708: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987164827s Apr 4 18:44:25.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981769198s Apr 4 18:44:26.716: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978100138s Apr 4 
18:44:27.721: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973491503s Apr 4 18:44:28.726: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968169694s Apr 4 18:44:29.730: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963372487s Apr 4 18:44:30.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959567701s Apr 4 18:44:31.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 955.130983ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9073 Apr 4 18:44:32.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:44:32.945: INFO: stderr: "I0404 18:44:32.859969 3671 log.go:172] (0xc0003c8790) (0xc00064a3c0) Create stream\nI0404 18:44:32.860011 3671 log.go:172] (0xc0003c8790) (0xc00064a3c0) Stream added, broadcasting: 1\nI0404 18:44:32.861784 3671 log.go:172] (0xc0003c8790) Reply frame received for 1\nI0404 18:44:32.861824 3671 log.go:172] (0xc0003c8790) (0xc0002de0a0) Create stream\nI0404 18:44:32.861837 3671 log.go:172] (0xc0003c8790) (0xc0002de0a0) Stream added, broadcasting: 3\nI0404 18:44:32.862604 3671 log.go:172] (0xc0003c8790) Reply frame received for 3\nI0404 18:44:32.862632 3671 log.go:172] (0xc0003c8790) (0xc00064a460) Create stream\nI0404 18:44:32.862646 3671 log.go:172] (0xc0003c8790) (0xc00064a460) Stream added, broadcasting: 5\nI0404 18:44:32.863308 3671 log.go:172] (0xc0003c8790) Reply frame received for 5\nI0404 18:44:32.938948 3671 log.go:172] (0xc0003c8790) Data frame received for 5\nI0404 18:44:32.938972 3671 log.go:172] (0xc00064a460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 18:44:32.938994 3671 log.go:172] (0xc0003c8790) Data frame received for 3\nI0404 18:44:32.939050 3671 
log.go:172] (0xc0002de0a0) (3) Data frame handling\nI0404 18:44:32.939094 3671 log.go:172] (0xc0002de0a0) (3) Data frame sent\nI0404 18:44:32.939128 3671 log.go:172] (0xc0003c8790) Data frame received for 3\nI0404 18:44:32.939154 3671 log.go:172] (0xc0002de0a0) (3) Data frame handling\nI0404 18:44:32.939198 3671 log.go:172] (0xc00064a460) (5) Data frame sent\nI0404 18:44:32.939225 3671 log.go:172] (0xc0003c8790) Data frame received for 5\nI0404 18:44:32.939249 3671 log.go:172] (0xc00064a460) (5) Data frame handling\nI0404 18:44:32.940402 3671 log.go:172] (0xc0003c8790) Data frame received for 1\nI0404 18:44:32.940434 3671 log.go:172] (0xc00064a3c0) (1) Data frame handling\nI0404 18:44:32.940454 3671 log.go:172] (0xc00064a3c0) (1) Data frame sent\nI0404 18:44:32.940474 3671 log.go:172] (0xc0003c8790) (0xc00064a3c0) Stream removed, broadcasting: 1\nI0404 18:44:32.940498 3671 log.go:172] (0xc0003c8790) Go away received\nI0404 18:44:32.940883 3671 log.go:172] (0xc0003c8790) (0xc00064a3c0) Stream removed, broadcasting: 1\nI0404 18:44:32.940911 3671 log.go:172] (0xc0003c8790) (0xc0002de0a0) Stream removed, broadcasting: 3\nI0404 18:44:32.940924 3671 log.go:172] (0xc0003c8790) (0xc00064a460) Stream removed, broadcasting: 5\n" Apr 4 18:44:32.945: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 18:44:32.945: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 18:44:32.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:44:33.131: INFO: stderr: "I0404 18:44:33.064488 3691 log.go:172] (0xc00003bb80) (0xc0009340a0) Create stream\nI0404 18:44:33.064526 3691 log.go:172] (0xc00003bb80) (0xc0009340a0) Stream added, broadcasting: 1\nI0404 18:44:33.066660 3691 log.go:172] 
(0xc00003bb80) Reply frame received for 1\nI0404 18:44:33.066696 3691 log.go:172] (0xc00003bb80) (0xc000944000) Create stream\nI0404 18:44:33.066711 3691 log.go:172] (0xc00003bb80) (0xc000944000) Stream added, broadcasting: 3\nI0404 18:44:33.067394 3691 log.go:172] (0xc00003bb80) Reply frame received for 3\nI0404 18:44:33.067409 3691 log.go:172] (0xc00003bb80) (0xc000657400) Create stream\nI0404 18:44:33.067414 3691 log.go:172] (0xc00003bb80) (0xc000657400) Stream added, broadcasting: 5\nI0404 18:44:33.067995 3691 log.go:172] (0xc00003bb80) Reply frame received for 5\nI0404 18:44:33.126381 3691 log.go:172] (0xc00003bb80) Data frame received for 3\nI0404 18:44:33.126409 3691 log.go:172] (0xc000944000) (3) Data frame handling\nI0404 18:44:33.126426 3691 log.go:172] (0xc000944000) (3) Data frame sent\nI0404 18:44:33.126433 3691 log.go:172] (0xc00003bb80) Data frame received for 3\nI0404 18:44:33.126439 3691 log.go:172] (0xc000944000) (3) Data frame handling\nI0404 18:44:33.126506 3691 log.go:172] (0xc00003bb80) Data frame received for 5\nI0404 18:44:33.126535 3691 log.go:172] (0xc000657400) (5) Data frame handling\nI0404 18:44:33.126569 3691 log.go:172] (0xc000657400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0404 18:44:33.126677 3691 log.go:172] (0xc00003bb80) Data frame received for 5\nI0404 18:44:33.126690 3691 log.go:172] (0xc000657400) (5) Data frame handling\nI0404 18:44:33.128072 3691 log.go:172] (0xc00003bb80) Data frame received for 1\nI0404 18:44:33.128089 3691 log.go:172] (0xc0009340a0) (1) Data frame handling\nI0404 18:44:33.128106 3691 log.go:172] (0xc0009340a0) (1) Data frame sent\nI0404 18:44:33.128115 3691 log.go:172] (0xc00003bb80) (0xc0009340a0) Stream removed, broadcasting: 1\nI0404 18:44:33.128217 3691 log.go:172] (0xc00003bb80) Go away received\nI0404 18:44:33.128557 3691 log.go:172] (0xc00003bb80) (0xc0009340a0) Stream removed, broadcasting: 1\nI0404 18:44:33.128592 3691 log.go:172] (0xc00003bb80) (0xc000944000) 
Stream removed, broadcasting: 3\nI0404 18:44:33.128612 3691 log.go:172] (0xc00003bb80) (0xc000657400) Stream removed, broadcasting: 5\n" Apr 4 18:44:33.132: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 4 18:44:33.132: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 4 18:44:33.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:44:33.381: INFO: rc: 1 Apr 4 18:44:33.381: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: I0404 18:44:33.298686 3712 log.go:172] (0xc00003a420) (0xc00060d400) Create stream I0404 18:44:33.298738 3712 log.go:172] (0xc00003a420) (0xc00060d400) Stream added, broadcasting: 1 I0404 18:44:33.301249 3712 log.go:172] (0xc00003a420) Reply frame received for 1 I0404 18:44:33.301293 3712 log.go:172] (0xc00003a420) (0xc000508a00) Create stream I0404 18:44:33.301307 3712 log.go:172] (0xc00003a420) (0xc000508a00) Stream added, broadcasting: 3 I0404 18:44:33.302205 3712 log.go:172] (0xc00003a420) Reply frame received for 3 I0404 18:44:33.302243 3712 log.go:172] (0xc00003a420) (0xc000508aa0) Create stream I0404 18:44:33.302253 3712 log.go:172] (0xc00003a420) (0xc000508aa0) Stream added, broadcasting: 5 I0404 18:44:33.303121 3712 log.go:172] (0xc00003a420) Reply frame received for 5 I0404 18:44:33.375726 3712 log.go:172] (0xc00003a420) Data frame received for 1 I0404 18:44:33.375792 3712 log.go:172] (0xc00003a420) (0xc000508a00) Stream removed, broadcasting: 3 I0404 18:44:33.375838 3712 log.go:172] 
(0xc00060d400) (1) Data frame handling I0404 18:44:33.375854 3712 log.go:172] (0xc00060d400) (1) Data frame sent I0404 18:44:33.375874 3712 log.go:172] (0xc00003a420) (0xc000508aa0) Stream removed, broadcasting: 5 I0404 18:44:33.375892 3712 log.go:172] (0xc00003a420) (0xc00060d400) Stream removed, broadcasting: 1 I0404 18:44:33.375915 3712 log.go:172] (0xc00003a420) Go away received I0404 18:44:33.376387 3712 log.go:172] (0xc00003a420) (0xc00060d400) Stream removed, broadcasting: 1 I0404 18:44:33.376418 3712 log.go:172] (0xc00003a420) (0xc000508a00) Stream removed, broadcasting: 3 I0404 18:44:33.376428 3712 log.go:172] (0xc00003a420) (0xc000508aa0) Stream removed, broadcasting: 5 error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "424ec0e2d5709c43dbb0e732dacf5e6be8defb5b22ae930cb9859546605b4950": OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "read init-p: connection reset by peer": unknown error: exit status 1 Apr 4 18:44:43.381: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:44:43.477: INFO: rc: 1 Apr 4 18:44:43.477: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:44:53.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:44:53.575: INFO: rc: 1 Apr 4 18:44:53.575: INFO: 
Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:45:03.575: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:45:03.681: INFO: rc: 1 Apr 4 18:45:03.681: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:45:13.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:45:13.793: INFO: rc: 1 Apr 4 18:45:13.793: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:45:23.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:45:23.888: INFO: rc: 1 Apr 4 18:45:23.888: INFO: Waiting 10s to retry failed 
RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:45:33.888: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:45:33.972: INFO: rc: 1 Apr 4 18:45:33.972: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:45:43.972: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:45:44.076: INFO: rc: 1 Apr 4 18:45:44.077: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:45:54.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:45:54.171: INFO: rc: 1 Apr 4 18:45:54.171: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:46:04.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:46:04.268: INFO: rc: 1 Apr 4 18:46:04.268: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:46:14.269: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:46:14.361: INFO: rc: 1 Apr 4 18:46:14.361: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:46:24.362: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:46:24.455: INFO: rc: 1 Apr 4 18:46:24.455: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:46:34.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:46:34.539: INFO: rc: 1 Apr 4 18:46:34.539: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:46:44.539: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:46:44.618: INFO: rc: 1 Apr 4 18:46:44.618: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:46:54.618: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:46:54.713: INFO: rc: 1 Apr 4 18:46:54.713: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:47:04.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:47:04.798: INFO: rc: 1 Apr 4 18:47:04.798: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:47:14.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:47:14.892: INFO: rc: 1 Apr 4 18:47:14.892: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:47:24.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:47:24.986: INFO: rc: 1 Apr 4 18:47:24.986: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:47:34.987: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:47:35.087: INFO: rc: 1 Apr 4 18:47:35.087: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:47:45.087: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:47:45.185: INFO: rc: 1 Apr 4 18:47:45.185: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:47:55.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:47:55.280: INFO: rc: 1 Apr 4 18:47:55.280: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:48:05.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:48:05.377: INFO: rc: 1 Apr 4 18:48:05.377: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:48:15.378: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:48:15.482: INFO: rc: 1 Apr 4 18:48:15.482: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:48:25.482: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:48:25.598: INFO: rc: 1 Apr 4 18:48:25.598: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:48:35.598: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:48:35.693: INFO: rc: 1 Apr 4 18:48:35.693: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:48:45.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:48:45.800: INFO: rc: 1 Apr 4 18:48:45.800: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:48:55.800: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:48:55.875: INFO: rc: 1 Apr 4 18:48:55.875: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: 
Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:49:05.876: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:49:05.970: INFO: rc: 1 Apr 4 18:49:05.970: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:49:15.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:49:16.067: INFO: rc: 1 Apr 4 18:49:16.067: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:49:26.068: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:49:26.160: INFO: rc: 1 Apr 4 18:49:26.160: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-2" not found error: exit status 1 Apr 4 18:49:36.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9073 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 4 18:49:36.257: INFO: rc: 1 Apr 4 18:49:36.258: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Apr 4 18:49:36.258: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Apr 4 18:49:36.268: INFO: Deleting all statefulset in ns statefulset-9073 Apr 4 18:49:36.270: INFO: Scaling statefulset ss to 0 Apr 4 18:49:36.278: INFO: Waiting for statefulset status.replicas updated to 0 Apr 4 18:49:36.280: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:49:36.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9073" for this suite. 
• [SLOW TEST:367.920 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":281,"completed":246,"skipped":4184,"failed":0} [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:49:36.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-9353da9a-f3b2-4a86-8d2a-ebae29acd059 STEP: Creating a pod to test consume secrets Apr 4 18:49:36.415: INFO: Waiting up to 5m0s for pod "pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d" in namespace "secrets-563" to be "Succeeded or Failed" Apr 4 18:49:36.419: INFO: Pod "pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.304424ms Apr 4 18:49:38.423: INFO: Pod "pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008397289s Apr 4 18:49:40.427: INFO: Pod "pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012803159s Apr 4 18:49:42.430: INFO: Pod "pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015237437s Apr 4 18:49:44.434: INFO: Pod "pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.019640244s STEP: Saw pod success Apr 4 18:49:44.434: INFO: Pod "pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d" satisfied condition "Succeeded or Failed" Apr 4 18:49:44.438: INFO: Trying to get logs from node latest-worker pod pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d container secret-volume-test: STEP: delete the pod Apr 4 18:49:44.484: INFO: Waiting for pod pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d to disappear Apr 4 18:49:44.503: INFO: Pod pod-secrets-ed67f7c7-a0fc-40a8-a0d2-c6c42c61f02d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:49:44.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-563" for this suite. 
• [SLOW TEST:8.156 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":281,"completed":247,"skipped":4184,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:49:44.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Apr 4 18:49:44.623: INFO: namespace kubectl-2579 Apr 4 18:49:44.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2579' Apr 4 18:49:45.146: INFO: stderr: "" Apr 4 18:49:45.146: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Apr 4 18:49:46.216: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 18:49:46.216: INFO: Found 0 / 1 Apr 4 18:49:47.150: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 18:49:47.150: INFO: Found 0 / 1 Apr 4 18:49:48.150: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 18:49:48.150: INFO: Found 1 / 1 Apr 4 18:49:48.150: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 4 18:49:48.154: INFO: Selector matched 1 pods for map[app:agnhost] Apr 4 18:49:48.154: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 4 18:49:48.154: INFO: wait on agnhost-master startup in kubectl-2579 Apr 4 18:49:48.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs agnhost-master-c8smp agnhost-master --namespace=kubectl-2579' Apr 4 18:49:48.272: INFO: stderr: "" Apr 4 18:49:48.272: INFO: stdout: "Paused\n" STEP: exposing RC Apr 4 18:49:48.272: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2579' Apr 4 18:49:48.400: INFO: stderr: "" Apr 4 18:49:48.400: INFO: stdout: "service/rm2 exposed\n" Apr 4 18:49:48.405: INFO: Service rm2 in namespace kubectl-2579 found. STEP: exposing service Apr 4 18:49:50.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2579' Apr 4 18:49:50.546: INFO: stderr: "" Apr 4 18:49:50.546: INFO: stdout: "service/rm3 exposed\n" Apr 4 18:49:50.562: INFO: Service rm3 in namespace kubectl-2579 found. 
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:49:52.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2579" for this suite. • [SLOW TEST:8.066 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1149 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":281,"completed":248,"skipped":4186,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:49:52.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 18:49:53.202: INFO: Waiting up to 5m0s for pod "downwardapi-volume-317d8848-5e82-42ed-9157-2c4cc8a303b8" in namespace 
"downward-api-8357" to be "Succeeded or Failed" Apr 4 18:49:53.220: INFO: Pod "downwardapi-volume-317d8848-5e82-42ed-9157-2c4cc8a303b8": Phase="Pending", Reason="", readiness=false. Elapsed: 18.189052ms Apr 4 18:49:55.264: INFO: Pod "downwardapi-volume-317d8848-5e82-42ed-9157-2c4cc8a303b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062273782s Apr 4 18:49:57.276: INFO: Pod "downwardapi-volume-317d8848-5e82-42ed-9157-2c4cc8a303b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073916016s STEP: Saw pod success Apr 4 18:49:57.276: INFO: Pod "downwardapi-volume-317d8848-5e82-42ed-9157-2c4cc8a303b8" satisfied condition "Succeeded or Failed" Apr 4 18:49:57.279: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-317d8848-5e82-42ed-9157-2c4cc8a303b8 container client-container: STEP: delete the pod Apr 4 18:49:57.326: INFO: Waiting for pod downwardapi-volume-317d8848-5e82-42ed-9157-2c4cc8a303b8 to disappear Apr 4 18:49:57.342: INFO: Pod downwardapi-volume-317d8848-5e82-42ed-9157-2c4cc8a303b8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:49:57.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8357" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":249,"skipped":4188,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:49:57.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1318 STEP: creating a pod Apr 4 18:49:57.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-5452 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 4 18:49:57.518: INFO: stderr: "" Apr 4 18:49:57.518: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Apr 4 18:49:57.518: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 4 18:49:57.518: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5452" to be "running and ready, or succeeded" Apr 4 18:49:57.527: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.488413ms Apr 4 18:49:59.839: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.320631893s Apr 4 18:50:01.847: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.328645611s Apr 4 18:50:03.851: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.332343086s Apr 4 18:50:03.851: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 4 18:50:03.851: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 4 18:50:03.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5452' Apr 4 18:50:03.959: INFO: stderr: "" Apr 4 18:50:03.959: INFO: stdout: "I0404 18:50:01.837008 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/qqlx 517\nI0404 18:50:02.037213 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/bk2c 528\nI0404 18:50:02.237345 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/pt28 489\nI0404 18:50:02.437323 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/2qx 293\nI0404 18:50:02.637397 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/t25 385\nI0404 18:50:02.837290 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/76b 414\nI0404 18:50:03.037346 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/bsz 420\nI0404 18:50:03.237300 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/6skn 586\nI0404 18:50:03.437391 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/nttp 205\nI0404 18:50:03.637278 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/gcpt 216\nI0404 18:50:03.837300 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/5lft 287\n" STEP: limiting log lines Apr 4 18:50:03.959: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5452 --tail=1' Apr 4 18:50:04.073: INFO: stderr: "" Apr 4 18:50:04.073: INFO: stdout: "I0404 18:50:04.037188 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/4q95 503\n" Apr 4 18:50:04.073: INFO: got output "I0404 18:50:04.037188 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/4q95 503\n" STEP: limiting log bytes Apr 4 18:50:04.073: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5452 --limit-bytes=1' Apr 4 18:50:04.171: INFO: stderr: "" Apr 4 18:50:04.171: INFO: stdout: "I" Apr 4 18:50:04.171: INFO: got output "I" STEP: exposing timestamps Apr 4 18:50:04.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5452 --tail=1 --timestamps' Apr 4 18:50:04.260: INFO: stderr: "" Apr 4 18:50:04.260: INFO: stdout: "2020-04-04T18:50:04.237334689Z I0404 18:50:04.237213 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/jnw9 328\n" Apr 4 18:50:04.260: INFO: got output "2020-04-04T18:50:04.237334689Z I0404 18:50:04.237213 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/jnw9 328\n" STEP: restricting to a time range Apr 4 18:50:06.760: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5452 --since=1s' Apr 4 18:50:06.866: INFO: stderr: "" Apr 4 18:50:06.866: INFO: stdout: "I0404 18:50:06.037295 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/ld9 416\nI0404 18:50:06.237284 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/86lx 332\nI0404 18:50:06.437247 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/rksn 231\nI0404 18:50:06.637281 1 
logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/r7f 381\nI0404 18:50:06.837196 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/l89w 223\n" Apr 4 18:50:06.866: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-5452 --since=24h' Apr 4 18:50:06.963: INFO: stderr: "" Apr 4 18:50:06.963: INFO: stdout: "I0404 18:50:01.837008 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/qqlx 517\nI0404 18:50:02.037213 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/bk2c 528\nI0404 18:50:02.237345 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/pt28 489\nI0404 18:50:02.437323 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/2qx 293\nI0404 18:50:02.637397 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/t25 385\nI0404 18:50:02.837290 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/76b 414\nI0404 18:50:03.037346 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/bsz 420\nI0404 18:50:03.237300 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/6skn 586\nI0404 18:50:03.437391 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/nttp 205\nI0404 18:50:03.637278 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/gcpt 216\nI0404 18:50:03.837300 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/5lft 287\nI0404 18:50:04.037188 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/4q95 503\nI0404 18:50:04.237213 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/jnw9 328\nI0404 18:50:04.437234 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/kzl 338\nI0404 18:50:04.637309 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/994 402\nI0404 18:50:04.837260 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/2wsq 592\nI0404 18:50:05.037176 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/qfx 591\nI0404 
18:50:05.237222 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/95wz 566\nI0404 18:50:05.437249 1 logs_generator.go:76] 18 POST /api/v1/namespaces/ns/pods/g97 525\nI0404 18:50:05.637276 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/hmm 518\nI0404 18:50:05.837270 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/thz 361\nI0404 18:50:06.037295 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/ld9 416\nI0404 18:50:06.237284 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/86lx 332\nI0404 18:50:06.437247 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/rksn 231\nI0404 18:50:06.637281 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/r7f 381\nI0404 18:50:06.837196 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/l89w 223\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1324 Apr 4 18:50:06.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-5452' Apr 4 18:50:13.005: INFO: stderr: "" Apr 4 18:50:13.005: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:50:13.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5452" for this suite. 
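Editor's note: the filtering flags this test exercises (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) behave like ordinary stream truncation and filtering. A minimal local analogue, assuming only coreutils (the file path and log lines are illustrative, not the test's actual output):

```shell
# Simulate a pod's log stream locally (path and contents are illustrative).
printf 'line 1\nline 2\nline 3\n' > /tmp/pod.log

# `kubectl logs --tail=1` keeps only the last N lines of the stream:
tail -n 1 /tmp/pod.log        # -> line 3

# `kubectl logs --limit-bytes=1` truncates the stream after N bytes,
# which is why the test above got back just "I":
head -c 1 /tmp/pod.log        # -> l

# `--timestamps` prefixes each line with an RFC3339 timestamp, and
# `--since=1s` / `--since=24h` filter on those timestamps; both rely on
# the container runtime's stored per-line timestamps, so there is no
# direct coreutils analogue.
```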
• [SLOW TEST:15.664 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":281,"completed":250,"skipped":4196,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:50:13.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:50:30.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3264" for this suite. • [SLOW TEST:17.075 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":281,"completed":251,"skipped":4208,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:50:30.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Apr 4 18:50:30.144: INFO: Waiting up to 5m0s for pod "downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27" in namespace "projected-1381" to be "Succeeded or Failed" Apr 4 18:50:30.148: INFO: Pod "downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27": Phase="Pending", Reason="", readiness=false. Elapsed: 3.915739ms Apr 4 18:50:32.151: INFO: Pod "downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007136108s Apr 4 18:50:34.156: INFO: Pod "downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011406731s Apr 4 18:50:36.159: INFO: Pod "downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27": Phase="Running", Reason="", readiness=true. 
Elapsed: 6.015073621s Apr 4 18:50:38.163: INFO: Pod "downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.018314269s STEP: Saw pod success Apr 4 18:50:38.163: INFO: Pod "downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27" satisfied condition "Succeeded or Failed" Apr 4 18:50:38.165: INFO: Trying to get logs from node latest-worker2 pod downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27 container client-container: STEP: delete the pod Apr 4 18:50:38.194: INFO: Waiting for pod downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27 to disappear Apr 4 18:50:38.211: INFO: Pod downwardapi-volume-58c94ce7-2214-4cdd-8f5d-245be0a47c27 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:50:38.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1381" for this suite. 
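Editor's note: the behavior verified here — no container memory limit set, so the downward API reports the node's allocatable memory for `limits.memory` — corresponds to a projected downwardAPI volume shaped roughly as below. This is a sketch: the pod name, image, and mount path are illustrative, not the test's generated values.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set here, so limits.memory resolves
    # to the node's allocatable memory -- the default this test asserts.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```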
• [SLOW TEST:8.126 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":281,"completed":252,"skipped":4246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:50:38.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:249 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:50:38.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config version' Apr 4 18:50:38.394: INFO: stderr: "" Apr 4 18:50:38.394: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.0-alpha.1.325+47f5d2923f3f35\", GitCommit:\"47f5d2923f3f35adc66b4797a95720f67c948b4e\", GitTreeState:\"clean\", BuildDate:\"2020-04-04T16:45:34Z\", GoVersion:\"go1.13.9\", 
Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:09:19Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:50:38.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3816" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":281,"completed":253,"skipped":4271,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:50:38.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-642f0272-0596-4d92-aee7-524bd1026f40 STEP: Creating a pod to test consume configMaps Apr 4 18:50:38.487: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf659f6c-e2a1-4347-99b8-619afe5b2bac" in namespace "projected-5736" to be "Succeeded or Failed" Apr 4 
18:50:38.501: INFO: Pod "pod-projected-configmaps-cf659f6c-e2a1-4347-99b8-619afe5b2bac": Phase="Pending", Reason="", readiness=false. Elapsed: 13.888667ms Apr 4 18:50:40.504: INFO: Pod "pod-projected-configmaps-cf659f6c-e2a1-4347-99b8-619afe5b2bac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016800317s Apr 4 18:50:42.507: INFO: Pod "pod-projected-configmaps-cf659f6c-e2a1-4347-99b8-619afe5b2bac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019762044s STEP: Saw pod success Apr 4 18:50:42.507: INFO: Pod "pod-projected-configmaps-cf659f6c-e2a1-4347-99b8-619afe5b2bac" satisfied condition "Succeeded or Failed" Apr 4 18:50:42.510: INFO: Trying to get logs from node latest-worker pod pod-projected-configmaps-cf659f6c-e2a1-4347-99b8-619afe5b2bac container projected-configmap-volume-test: STEP: delete the pod Apr 4 18:50:42.523: INFO: Waiting for pod pod-projected-configmaps-cf659f6c-e2a1-4347-99b8-619afe5b2bac to disappear Apr 4 18:50:42.534: INFO: Pod pod-projected-configmaps-cf659f6c-e2a1-4347-99b8-619afe5b2bac no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:50:42.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5736" for this suite. 
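Editor's note: a hedged sketch of the volume shape this test exercises — a projected ConfigMap with a per-item path mapping and an explicit per-item file mode ("Item mode set"). The key, mapped path, mode, and image below are illustrative assumptions, not the test's actual values.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # illustrative image
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map
          items:
          - key: data-1            # illustrative key
            path: path/to/data-2   # key remapped to a nested path
            mode: 0400             # per-item mode, as the test title requires
```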
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":254,"skipped":4309,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:50:42.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 4 18:50:42.607: INFO: Waiting up to 5m0s for pod "pod-8333aba6-b007-4b6b-8bf1-382a03f271e9" in namespace "emptydir-2621" to be "Succeeded or Failed" Apr 4 18:50:42.625: INFO: Pod "pod-8333aba6-b007-4b6b-8bf1-382a03f271e9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.274387ms Apr 4 18:50:44.629: INFO: Pod "pod-8333aba6-b007-4b6b-8bf1-382a03f271e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021423836s Apr 4 18:50:46.632: INFO: Pod "pod-8333aba6-b007-4b6b-8bf1-382a03f271e9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024966615s STEP: Saw pod success Apr 4 18:50:46.632: INFO: Pod "pod-8333aba6-b007-4b6b-8bf1-382a03f271e9" satisfied condition "Succeeded or Failed" Apr 4 18:50:46.635: INFO: Trying to get logs from node latest-worker pod pod-8333aba6-b007-4b6b-8bf1-382a03f271e9 container test-container: STEP: delete the pod Apr 4 18:50:46.653: INFO: Waiting for pod pod-8333aba6-b007-4b6b-8bf1-382a03f271e9 to disappear Apr 4 18:50:46.658: INFO: Pod pod-8333aba6-b007-4b6b-8bf1-382a03f271e9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:50:46.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2621" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":255,"skipped":4311,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:50:46.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 4 
18:50:47.309: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 18:50:49.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623047, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623047, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623047, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623047, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 18:50:52.331: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:50:52.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "webhook-6971" for this suite. STEP: Destroying namespace "webhook-6971-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.890 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":281,"completed":256,"skipped":4323,"failed":0} [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:50:52.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:50:52.631: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:51:00.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3878" for this suite. • [SLOW TEST:7.742 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":281,"completed":257,"skipped":4323,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:51:00.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:51:00.403: INFO: Creating ReplicaSet my-hostname-basic-33971724-532f-4469-b60b-2db993565412 Apr 4 18:51:00.413: INFO: Pod name 
my-hostname-basic-33971724-532f-4469-b60b-2db993565412: Found 0 pods out of 1 Apr 4 18:51:05.438: INFO: Pod name my-hostname-basic-33971724-532f-4469-b60b-2db993565412: Found 1 pods out of 1 Apr 4 18:51:05.438: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-33971724-532f-4469-b60b-2db993565412" is running Apr 4 18:51:05.440: INFO: Pod "my-hostname-basic-33971724-532f-4469-b60b-2db993565412-6lmhc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 18:51:00 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 18:51:03 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 18:51:03 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-04 18:51:00 +0000 UTC Reason: Message:}]) Apr 4 18:51:05.441: INFO: Trying to dial the pod Apr 4 18:51:10.452: INFO: Controller my-hostname-basic-33971724-532f-4469-b60b-2db993565412: Got expected result from replica 1 [my-hostname-basic-33971724-532f-4469-b60b-2db993565412-6lmhc]: "my-hostname-basic-33971724-532f-4469-b60b-2db993565412-6lmhc", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:51:10.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1944" for this suite. 
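Editor's note: the ReplicaSet under test runs one replica that serves its own pod hostname, which the test then dials and compares against the pod name. A sketch of the manifest, with the generated name shortened and the label key assumed (the image matches the agnhost image used elsewhere in this run):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic          # the test appends a UUID to this
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic      # illustrative label key/value
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        # serve-hostname replies with the pod's hostname, so each
        # replica's response identifies the pod that served it.
        args: ["serve-hostname"]
```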
• [SLOW TEST:10.164 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":281,"completed":258,"skipped":4339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:51:10.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Apr 4 18:51:10.512: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 4 18:51:10.540: INFO: Waiting for terminating namespaces to be deleted... 
Apr 4 18:51:10.558: INFO: Logging pods the kubelet thinks is on node latest-worker before test Apr 4 18:51:10.563: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 18:51:10.563: INFO: Container kindnet-cni ready: true, restart count 0 Apr 4 18:51:10.563: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 18:51:10.563: INFO: Container kube-proxy ready: true, restart count 0 Apr 4 18:51:10.563: INFO: my-hostname-basic-33971724-532f-4469-b60b-2db993565412-6lmhc from replicaset-1944 started at 2020-04-04 18:51:00 +0000 UTC (1 container statuses recorded) Apr 4 18:51:10.563: INFO: Container my-hostname-basic-33971724-532f-4469-b60b-2db993565412 ready: true, restart count 0 Apr 4 18:51:10.563: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test Apr 4 18:51:10.567: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 18:51:10.567: INFO: Container kindnet-cni ready: true, restart count 0 Apr 4 18:51:10.567: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded) Apr 4 18:51:10.567: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1602b27d6eb05e09], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
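Editor's note: the `FailedScheduling` event above comes from a pod whose `nodeSelector` matches no node label in the cluster. A sketch of such a pod (the selector key/value and image are illustrative; `restricted-pod` is the name from the event):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  # No node carries this label, so the scheduler reports:
  # "0/3 nodes are available: 3 node(s) didn't match node selector."
  nodeSelector:
    label: nonexistent             # illustrative non-matching selector
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2    # illustrative image
```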
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:51:11.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1838" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":281,"completed":259,"skipped":4364,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:51:11.594: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 18:51:12.238: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 18:51:14.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623072, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623072, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623072, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623072, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:51:16.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623072, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623072, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623072, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623072, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 18:51:19.281: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:51:19.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4940" for this suite.
STEP: Destroying namespace "webhook-4940-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.893 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":281,"completed":260,"skipped":4364,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:51:19.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Apr 4 18:51:19.549: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfc0c75e-417c-49c4-9691-81e156ec3682" in namespace "downward-api-4728" to be "Succeeded or Failed"
Apr 4 18:51:19.552: INFO: Pod "downwardapi-volume-cfc0c75e-417c-49c4-9691-81e156ec3682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.540017ms
Apr 4 18:51:21.556: INFO: Pod "downwardapi-volume-cfc0c75e-417c-49c4-9691-81e156ec3682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006305868s
Apr 4 18:51:23.559: INFO: Pod "downwardapi-volume-cfc0c75e-417c-49c4-9691-81e156ec3682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009678028s
STEP: Saw pod success
Apr 4 18:51:23.559: INFO: Pod "downwardapi-volume-cfc0c75e-417c-49c4-9691-81e156ec3682" satisfied condition "Succeeded or Failed"
Apr 4 18:51:23.561: INFO: Trying to get logs from node latest-worker pod downwardapi-volume-cfc0c75e-417c-49c4-9691-81e156ec3682 container client-container:
STEP: delete the pod
Apr 4 18:51:23.592: INFO: Waiting for pod downwardapi-volume-cfc0c75e-417c-49c4-9691-81e156ec3682 to disappear
Apr 4 18:51:23.595: INFO: Pod downwardapi-volume-cfc0c75e-417c-49c4-9691-81e156ec3682 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:51:23.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4728" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":281,"completed":261,"skipped":4383,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:51:23.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:51:23.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1504" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":281,"completed":262,"skipped":4416,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:51:23.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:51:27.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3040" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":281,"completed":263,"skipped":4423,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:51:27.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Apr 4 18:51:28.118: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 4 18:51:28.126: INFO: Waiting for terminating namespaces to be deleted...
Apr 4 18:51:28.128: INFO: Logging pods the kubelet thinks is on node latest-worker before test
Apr 4 18:51:28.131: INFO: kube-proxy-s9v6p from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 18:51:28.131: INFO: Container kube-proxy ready: true, restart count 0
Apr 4 18:51:28.131: INFO: kindnet-vnjgh from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 18:51:28.131: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 18:51:28.131: INFO: Logging pods the kubelet thinks is on node latest-worker2 before test
Apr 4 18:51:28.136: INFO: kindnet-zq6gp from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 18:51:28.136: INFO: Container kindnet-cni ready: true, restart count 0
Apr 4 18:51:28.136: INFO: kube-proxy-c5xlk from kube-system started at 2020-03-15 18:28:07 +0000 UTC (1 container statuses recorded)
Apr 4 18:51:28.136: INFO: Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node latest-worker
STEP: verifying the node has the label node latest-worker2
Apr 4 18:51:28.282: INFO: Pod kindnet-vnjgh requesting resource cpu=100m on Node latest-worker
Apr 4 18:51:28.282: INFO: Pod kindnet-zq6gp requesting resource cpu=100m on Node latest-worker2
Apr 4 18:51:28.282: INFO: Pod kube-proxy-c5xlk requesting resource cpu=0m on Node latest-worker2
Apr 4 18:51:28.282: INFO: Pod kube-proxy-s9v6p requesting resource cpu=0m on Node latest-worker
STEP: Starting Pods to consume most of the cluster CPU.
Apr 4 18:51:28.282: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker
Apr 4 18:51:28.286: INFO: Creating a pod which consumes cpu=11130m on Node latest-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-1949d33f-c947-4fc0-8b9b-32da22f53243.1602b2818e468df7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4652/filler-pod-1949d33f-c947-4fc0-8b9b-32da22f53243 to latest-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1949d33f-c947-4fc0-8b9b-32da22f53243.1602b281d75c3315], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1949d33f-c947-4fc0-8b9b-32da22f53243.1602b28223c5dab8], Reason = [Created], Message = [Created container filler-pod-1949d33f-c947-4fc0-8b9b-32da22f53243]
STEP: Considering event: Type = [Normal], Name = [filler-pod-1949d33f-c947-4fc0-8b9b-32da22f53243.1602b2823d1ed0a6], Reason = [Started], Message = [Started container filler-pod-1949d33f-c947-4fc0-8b9b-32da22f53243]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e87ba583-17bc-40b5-b669-4912007d76a9.1602b2818fa1000b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4652/filler-pod-e87ba583-17bc-40b5-b669-4912007d76a9 to latest-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e87ba583-17bc-40b5-b669-4912007d76a9.1602b28211f186e9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e87ba583-17bc-40b5-b669-4912007d76a9.1602b2824603a178], Reason = [Created], Message = [Created container filler-pod-e87ba583-17bc-40b5-b669-4912007d76a9]
STEP: Considering event: Type = [Normal], Name = [filler-pod-e87ba583-17bc-40b5-b669-4912007d76a9.1602b2825731f641], Reason = [Started], Message = [Started container filler-pod-e87ba583-17bc-40b5-b669-4912007d76a9]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1602b2827f002323], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node latest-worker2
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node latest-worker
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:51:33.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4652" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:5.518 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":281,"completed":264,"skipped":4444,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:51:33.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 4 18:51:41.526: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 18:51:41.577: INFO: Pod pod-with-prestop-http-hook still exists
Apr 4 18:51:43.577: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 18:51:43.580: INFO: Pod pod-with-prestop-http-hook still exists
Apr 4 18:51:45.577: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 4 18:51:45.579: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:51:45.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4914" for this suite.
• [SLOW TEST:12.198 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":281,"completed":265,"skipped":4466,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:51:45.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-1603
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 4 18:51:45.659: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 4 18:51:45.699: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 18:51:47.704: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 18:51:49.702: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 4 18:51:51.798: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:51:53.702: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:51:55.703: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:51:57.702: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:51:59.702: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:52:01.702: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:52:03.703: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:52:05.702: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 4 18:52:07.703: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 4 18:52:07.709: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Apr 4 18:52:17.747: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.107:8080/dial?request=hostname&protocol=http&host=10.244.2.37&port=8080&tries=1'] Namespace:pod-network-test-1603 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 18:52:17.747: INFO: >>> kubeConfig: /root/.kube/config
I0404 18:52:17.775181 7 log.go:172] (0xc002670630) (0xc001de46e0) Create stream
I0404 18:52:17.775204 7 log.go:172] (0xc002670630) (0xc001de46e0) Stream added, broadcasting: 1
I0404 18:52:17.776614 7 log.go:172] (0xc002670630) Reply frame received for 1
I0404 18:52:17.776645 7 log.go:172] (0xc002670630) (0xc00108fea0) Create stream
I0404 18:52:17.776658 7 log.go:172] (0xc002670630) (0xc00108fea0) Stream added, broadcasting: 3
I0404 18:52:17.777579 7 log.go:172] (0xc002670630) Reply frame received for 3
I0404 18:52:17.777618 7 log.go:172] (0xc002670630) (0xc0013279a0) Create stream
I0404 18:52:17.777633 7 log.go:172] (0xc002670630) (0xc0013279a0) Stream added, broadcasting: 5
I0404 18:52:17.778468 7 log.go:172] (0xc002670630) Reply frame received for 5
I0404 18:52:17.842417 7 log.go:172] (0xc002670630) Data frame received for 3
I0404 18:52:17.842452 7 log.go:172] (0xc00108fea0) (3) Data frame handling
I0404 18:52:17.842470 7 log.go:172] (0xc00108fea0) (3) Data frame sent
I0404 18:52:17.842772 7 log.go:172] (0xc002670630) Data frame received for 3
I0404 18:52:17.842799 7 log.go:172] (0xc00108fea0) (3) Data frame handling
I0404 18:52:17.843074 7 log.go:172] (0xc002670630) Data frame received for 5
I0404 18:52:17.843114 7 log.go:172] (0xc0013279a0) (5) Data frame handling
I0404 18:52:17.844670 7 log.go:172] (0xc002670630) Data frame received for 1
I0404 18:52:17.844703 7 log.go:172] (0xc001de46e0) (1) Data frame handling
I0404 18:52:17.844719 7 log.go:172] (0xc001de46e0) (1) Data frame sent
I0404 18:52:17.844732 7 log.go:172] (0xc002670630) (0xc001de46e0) Stream removed, broadcasting: 1
I0404 18:52:17.844745 7 log.go:172] (0xc002670630) Go away received
I0404 18:52:17.844895 7 log.go:172] (0xc002670630) (0xc001de46e0) Stream removed, broadcasting: 1
I0404 18:52:17.844920 7 log.go:172] (0xc002670630) (0xc00108fea0) Stream removed, broadcasting: 3
I0404 18:52:17.844936 7 log.go:172] (0xc002670630) (0xc0013279a0) Stream removed, broadcasting: 5
Apr 4 18:52:17.844: INFO: Waiting for responses: map[]
Apr 4 18:52:17.848: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.107:8080/dial?request=hostname&protocol=http&host=10.244.1.106&port=8080&tries=1'] Namespace:pod-network-test-1603 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 4 18:52:17.848: INFO: >>> kubeConfig: /root/.kube/config
I0404 18:52:17.875027 7 log.go:172] (0xc002670d10) (0xc001de4c80) Create stream
I0404 18:52:17.875053 7 log.go:172] (0xc002670d10) (0xc001de4c80) Stream added, broadcasting: 1
I0404 18:52:17.876634 7 log.go:172] (0xc002670d10) Reply frame received for 1
I0404 18:52:17.876668 7 log.go:172] (0xc002670d10) (0xc001de4d20) Create stream
I0404 18:52:17.876682 7 log.go:172] (0xc002670d10) (0xc001de4d20) Stream added, broadcasting: 3
I0404 18:52:17.877518 7 log.go:172] (0xc002670d10) Reply frame received for 3
I0404 18:52:17.877547 7 log.go:172] (0xc002670d10) (0xc001de4e60) Create stream
I0404 18:52:17.877557 7 log.go:172] (0xc002670d10) (0xc001de4e60) Stream added, broadcasting: 5
I0404 18:52:17.878484 7 log.go:172] (0xc002670d10) Reply frame received for 5
I0404 18:52:17.944474 7 log.go:172] (0xc002670d10) Data frame received for 3
I0404 18:52:17.944508 7 log.go:172] (0xc001de4d20) (3) Data frame handling
I0404 18:52:17.944529 7 log.go:172] (0xc001de4d20) (3) Data frame sent
I0404 18:52:17.944661 7 log.go:172] (0xc002670d10) Data frame received for 5
I0404 18:52:17.944671 7 log.go:172] (0xc001de4e60) (5) Data frame handling
I0404 18:52:17.944763 7 log.go:172] (0xc002670d10) Data frame received for 3
I0404 18:52:17.944779 7 log.go:172] (0xc001de4d20) (3) Data frame handling
I0404 18:52:17.946044 7 log.go:172] (0xc002670d10) Data frame received for 1
I0404 18:52:17.946088 7 log.go:172] (0xc001de4c80) (1) Data frame handling
I0404 18:52:17.946117 7 log.go:172] (0xc001de4c80) (1) Data frame sent
I0404 18:52:17.946143 7 log.go:172] (0xc002670d10) (0xc001de4c80) Stream removed, broadcasting: 1
I0404 18:52:17.946177 7 log.go:172] (0xc002670d10) Go away received
I0404 18:52:17.946222 7 log.go:172] (0xc002670d10) (0xc001de4c80) Stream removed, broadcasting: 1
I0404 18:52:17.946240 7 log.go:172] (0xc002670d10) (0xc001de4d20) Stream removed, broadcasting: 3
I0404 18:52:17.946250 7 log.go:172] (0xc002670d10) (0xc001de4e60) Stream removed, broadcasting: 5
Apr 4 18:52:17.946: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:52:17.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1603" for this suite.
• [SLOW TEST:32.364 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":281,"completed":266,"skipped":4479,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:52:17.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 4 18:52:18.597: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 4 18:52:20.835: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623138, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623138, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623138, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623138, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:52:22.847: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623138, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623138, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623138, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623138, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 18:52:26.021: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:52:26.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3692-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:52:27.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9104" for this suite.
STEP: Destroying namespace "webhook-9104-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.627 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":281,"completed":267,"skipped":4503,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:52:27.580: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 4 18:52:28.729: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 4 18:52:30.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623148, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623148, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623148, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623148, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 4 18:52:32.842: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623148, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623148, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623148, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623148, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-54c8b67c75\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 4 18:52:35.868: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:52:35.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:52:37.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4025" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:9.628 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":281,"completed":268,"skipped":4509,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:52:37.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:52:37.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Apr 4 18:52:38.023: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T18:52:37Z generation:1 name:name1 resourceVersion:5416607 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:aa1f9ece-facf-4943-807a-04a36ce56ccb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 4 18:52:48.031: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T18:52:48Z generation:1 name:name2 resourceVersion:5416649 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:263785ca-93a7-421f-a11a-680080c86c0a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 4 18:52:58.036: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T18:52:37Z generation:2 name:name1 resourceVersion:5416679 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:aa1f9ece-facf-4943-807a-04a36ce56ccb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 4 18:53:08.040: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T18:52:48Z generation:2 name:name2 resourceVersion:5416709 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:263785ca-93a7-421f-a11a-680080c86c0a] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 4 18:53:18.061: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T18:52:37Z generation:2 name:name1 resourceVersion:5416739 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:aa1f9ece-facf-4943-807a-04a36ce56ccb] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 4 18:53:28.072: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-04T18:52:48Z generation:2 name:name2 resourceVersion:5416765 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:263785ca-93a7-421f-a11a-680080c86c0a] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:53:38.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3311" for this suite.
• [SLOW TEST:61.407 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":281,"completed":269,"skipped":4514,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:53:38.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Lease
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:53:38.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-2622" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":281,"completed":270,"skipped":4533,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:53:38.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-6af37e77-5714-4054-b48e-deb9e854c571
STEP: Creating a pod to test consume secrets
Apr 4 18:53:38.997: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e9d51226-8f1f-400d-a137-3c344d8985d6" in namespace "projected-7163" to be "Succeeded or Failed"
Apr 4 18:53:39.015: INFO: Pod "pod-projected-secrets-e9d51226-8f1f-400d-a137-3c344d8985d6": Phase="Pending", Reason="", readiness=false. Elapsed: 17.768941ms
Apr 4 18:53:41.019: INFO: Pod "pod-projected-secrets-e9d51226-8f1f-400d-a137-3c344d8985d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022117391s
Apr 4 18:53:43.023: INFO: Pod "pod-projected-secrets-e9d51226-8f1f-400d-a137-3c344d8985d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026038587s
STEP: Saw pod success
Apr 4 18:53:43.023: INFO: Pod "pod-projected-secrets-e9d51226-8f1f-400d-a137-3c344d8985d6" satisfied condition "Succeeded or Failed"
Apr 4 18:53:43.026: INFO: Trying to get logs from node latest-worker pod pod-projected-secrets-e9d51226-8f1f-400d-a137-3c344d8985d6 container projected-secret-volume-test:
STEP: delete the pod
Apr 4 18:53:43.110: INFO: Waiting for pod pod-projected-secrets-e9d51226-8f1f-400d-a137-3c344d8985d6 to disappear
Apr 4 18:53:43.119: INFO: Pod pod-projected-secrets-e9d51226-8f1f-400d-a137-3c344d8985d6 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:53:43.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7163" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":271,"skipped":4539,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:53:43.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Apr 4 18:53:43.219: INFO: Create a RollingUpdate DaemonSet
Apr 4 18:53:43.222: INFO: Check that daemon pods launch on every node of the cluster
Apr 4 18:53:43.238: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:53:43.264: INFO: Number of nodes with available pods: 0
Apr 4 18:53:43.264: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:53:44.269: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:53:44.273: INFO: Number of nodes with available pods: 0
Apr 4 18:53:44.273: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:53:45.322: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:53:45.674: INFO: Number of nodes with available pods: 0
Apr 4 18:53:45.674: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:53:46.269: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:53:46.272: INFO: Number of nodes with available pods: 0
Apr 4 18:53:46.272: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:53:47.364: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:53:47.368: INFO: Number of nodes with available pods: 1
Apr 4 18:53:47.368: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:53:48.283: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:53:48.333: INFO: Number of nodes with available pods: 1
Apr 4 18:53:48.333: INFO: Node latest-worker is running more than one daemon pod
Apr 4 18:53:49.270: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:53:49.273: INFO: Number of nodes with available pods: 2
Apr 4 18:53:49.273: INFO: Number of running nodes: 2, number of available pods: 2
Apr 4 18:53:49.273: INFO: Update the DaemonSet to trigger a rollout
Apr 4 18:53:49.280: INFO: Updating DaemonSet daemon-set
Apr 4 18:54:04.466: INFO: Roll back the DaemonSet before rollout is complete
Apr 4 18:54:04.471: INFO: Updating DaemonSet daemon-set
Apr 4 18:54:04.472: INFO: Make sure DaemonSet rollback is complete
Apr 4 18:54:04.509: INFO: Wrong image for pod: daemon-set-cph4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 4 18:54:04.509: INFO: Pod daemon-set-cph4x is not available
Apr 4 18:54:04.670: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:54:05.674: INFO: Wrong image for pod: daemon-set-cph4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 4 18:54:05.674: INFO: Pod daemon-set-cph4x is not available
Apr 4 18:54:05.677: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:54:06.675: INFO: Wrong image for pod: daemon-set-cph4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 4 18:54:06.675: INFO: Pod daemon-set-cph4x is not available
Apr 4 18:54:06.679: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:54:07.675: INFO: Wrong image for pod: daemon-set-cph4x. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 4 18:54:07.675: INFO: Pod daemon-set-cph4x is not available
Apr 4 18:54:07.678: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 4 18:54:08.675: INFO: Pod daemon-set-w62zm is not available
Apr 4 18:54:08.679: INFO: DaemonSet pods can't tolerate node latest-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3210, will wait for the garbage collector to delete the pods
Apr 4 18:54:08.743: INFO: Deleting DaemonSet.extensions daemon-set took: 5.895342ms
Apr 4 18:54:09.044: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.310106ms
Apr 4 18:54:12.057: INFO: Number of nodes with available pods: 0
Apr 4 18:54:12.057: INFO: Number of running nodes: 0, number of available pods: 0
Apr 4 18:54:12.059: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3210/daemonsets","resourceVersion":"5417019"},"items":null}
Apr 4 18:54:12.062: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3210/pods","resourceVersion":"5417019"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:54:12.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3210" for this suite.
• [SLOW TEST:28.948 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":281,"completed":272,"skipped":4561,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:54:12.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3451
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3451
STEP: creating replication controller externalsvc in namespace services-3451
I0404 18:54:12.228511 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3451, replica count: 2
I0404 18:54:15.278913 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0404 18:54:18.279154 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Apr 4 18:54:18.306: INFO: Creating new exec pod
Apr 4 18:54:22.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32771 --kubeconfig=/root/.kube/config exec --namespace=services-3451 execpodf6fdz -- /bin/sh -x -c nslookup clusterip-service'
Apr 4 18:54:25.180: INFO: stderr: "I0404 18:54:25.105862 4590 log.go:172] (0xc00079e000) (0xc000cbc0a0) Create stream\nI0404 18:54:25.105894 4590 log.go:172] (0xc00079e000) (0xc000cbc0a0) Stream added, broadcasting: 1\nI0404 18:54:25.108204 4590 log.go:172] (0xc00079e000) Reply frame received for 1\nI0404 18:54:25.108252 4590 log.go:172] (0xc00079e000) (0xc000596f00) Create stream\nI0404 18:54:25.108268 4590 log.go:172] (0xc00079e000) (0xc000596f00) Stream added, broadcasting: 3\nI0404 18:54:25.108970 4590 log.go:172] (0xc00079e000) Reply frame received for 3\nI0404 18:54:25.108989 4590 log.go:172] (0xc00079e000) (0xc000597220) Create stream\nI0404 18:54:25.108995 4590 log.go:172] (0xc00079e000) (0xc000597220) Stream added, broadcasting: 5\nI0404 18:54:25.109869 4590 log.go:172] (0xc00079e000) Reply frame received for 5\nI0404 18:54:25.162956 4590 log.go:172] (0xc00079e000) Data frame received for 5\nI0404 18:54:25.163003 4590 log.go:172] (0xc000597220) (5) Data frame handling\nI0404 18:54:25.163045 4590 log.go:172] (0xc000597220) (5) Data frame sent\n+ nslookup clusterip-service\nI0404 18:54:25.172879 4590 log.go:172] (0xc00079e000) Data frame received for 3\nI0404 18:54:25.172910 4590 log.go:172] (0xc000596f00) (3) Data frame handling\nI0404 18:54:25.172938 4590 log.go:172] (0xc000596f00) (3) Data frame sent\nI0404 18:54:25.173933 4590 log.go:172] (0xc00079e000) Data frame received for 3\nI0404 18:54:25.173951 4590 log.go:172] (0xc000596f00) (3) Data frame handling\nI0404 18:54:25.173967 4590 log.go:172] (0xc000596f00) (3) Data frame sent\nI0404 18:54:25.174322 4590 log.go:172] (0xc00079e000) Data frame received for 3\nI0404 18:54:25.174340 4590 log.go:172] (0xc000596f00) (3) Data frame handling\nI0404 18:54:25.174469 4590 log.go:172] (0xc00079e000) Data frame received for 5\nI0404 18:54:25.174509 4590 log.go:172] (0xc000597220) (5) Data frame handling\nI0404 18:54:25.176057 4590 log.go:172] (0xc00079e000) Data frame received for 1\nI0404 18:54:25.176079 4590 log.go:172] (0xc000cbc0a0) (1) Data frame handling\nI0404 18:54:25.176092 4590 log.go:172] (0xc000cbc0a0) (1) Data frame sent\nI0404 18:54:25.176116 4590 log.go:172] (0xc00079e000) (0xc000cbc0a0) Stream removed, broadcasting: 1\nI0404 18:54:25.176138 4590 log.go:172] (0xc00079e000) Go away received\nI0404 18:54:25.176471 4590 log.go:172] (0xc00079e000) (0xc000cbc0a0) Stream removed, broadcasting: 1\nI0404 18:54:25.176489 4590 log.go:172] (0xc00079e000) (0xc000596f00) Stream removed, broadcasting: 3\nI0404 18:54:25.176503 4590 log.go:172] (0xc00079e000) (0xc000597220) Stream removed, broadcasting: 5\n"
Apr 4 18:54:25.180: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3451.svc.cluster.local\tcanonical name = externalsvc.services-3451.svc.cluster.local.\nName:\texternalsvc.services-3451.svc.cluster.local\nAddress: 10.96.31.107\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3451, will wait for the garbage collector to delete the pods
Apr 4 18:54:25.239: INFO: Deleting ReplicationController externalsvc took: 6.067566ms
Apr 4 18:54:25.640: INFO: Terminating ReplicationController externalsvc pods took: 400.280433ms
Apr 4 18:54:33.066: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:54:33.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3451" for this suite.
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:21.043 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ClusterIP to ExternalName [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":281,"completed":273,"skipped":4610,"failed":0}
S
------------------------------
[k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:54:33.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-5bbe132c-99ab-496b-92c5-b92120d8863a in namespace container-probe-3889
Apr 4 18:54:37.196: INFO: Started pod liveness-5bbe132c-99ab-496b-92c5-b92120d8863a in namespace container-probe-3889
STEP: checking the pod's current state and verifying that restartCount is present
Apr 4 18:54:37.199: INFO: Initial restart count of pod liveness-5bbe132c-99ab-496b-92c5-b92120d8863a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:58:37.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3889" for this suite.
• [SLOW TEST:244.839 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":281,"completed":274,"skipped":4611,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:58:37.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 4 18:58:38.234: INFO: >>> kubeConfig: /root/.kube/config
Apr 4 18:58:41.149: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:58:51.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6008" for this suite.
• [SLOW TEST:13.624 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":281,"completed":275,"skipped":4618,"failed":0}
SSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:58:51.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-c0a72209-51e9-4c1e-b01d-dab32e2ed641
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Apr 4 18:58:55.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3982" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":281,"completed":276,"skipped":4622,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Apr 4 18:58:55.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-mkt54 in namespace proxy-693
I0404 18:58:55.819716 7 runners.go:190] Created replication controller with name: proxy-service-mkt54, namespace: proxy-693, replica count: 1
I0404 18:58:56.870127 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0404 18:58:57.870397 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0404 18:58:58.870679 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:58:59.870907 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:59:00.871118 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:59:01.871352 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:59:02.871563 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:59:03.871758 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:59:04.871996 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:59:05.872213 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:59:06.872445 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0404 18:59:07.872708 7 runners.go:190] proxy-service-mkt54 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 4 18:59:07.875: INFO: setup took 12.082028397s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 4 18:59:07.881: INFO: (0) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 5.566418ms)
Apr 4 18:59:07.881: INFO: (0) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n/proxy/: test (200; 5.5537ms)
Apr 4 18:59:07.881: INFO: (0) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 5.668099ms)
Apr 4 18:59:07.882: INFO: (0) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 6.898329ms)
Apr 4 18:59:07.883: INFO: (0) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 7.12633ms)
Apr 4 18:59:07.884: INFO: (0) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 8.099038ms)
Apr 4 18:59:07.884: INFO: (0) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 8.194947ms)
Apr 4 18:59:07.884: INFO: (0) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 8.213739ms)
Apr 4 18:59:07.886: INFO: (0) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 10.131909ms)
Apr 4 18:59:07.886: INFO: (0) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testt... (200; 10.596149ms)
Apr 4 18:59:07.890: INFO: (0) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: t...
(200; 4.727193ms) Apr 4 18:59:07.895: INFO: (1) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n/proxy/: test (200; 4.735247ms) Apr 4 18:59:07.896: INFO: (1) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:462/proxy/: tls qux (200; 5.292688ms) Apr 4 18:59:07.896: INFO: (1) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 5.371172ms) Apr 4 18:59:07.897: INFO: (1) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 5.813326ms) Apr 4 18:59:07.897: INFO: (1) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 6.129471ms) Apr 4 18:59:07.897: INFO: (1) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testtest (200; 4.878912ms) Apr 4 18:59:07.904: INFO: (2) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:462/proxy/: tls qux (200; 4.953761ms) Apr 4 18:59:07.904: INFO: (2) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.994899ms) Apr 4 18:59:07.904: INFO: (2) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 5.091745ms) Apr 4 18:59:07.904: INFO: (2) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:1080/proxy/: t... (200; 4.980387ms) Apr 4 18:59:07.904: INFO: (2) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 5.043363ms) Apr 4 18:59:07.904: INFO: (2) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testtest (200; 4.812884ms) Apr 4 18:59:07.909: INFO: (3) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 4.739687ms) Apr 4 18:59:07.909: INFO: (3) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:1080/proxy/: t... 
(200; 4.820577ms) Apr 4 18:59:07.909: INFO: (3) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 4.784422ms) Apr 4 18:59:07.909: INFO: (3) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testt... (200; 3.053801ms) Apr 4 18:59:07.914: INFO: (4) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:462/proxy/: tls qux (200; 3.152521ms) Apr 4 18:59:07.914: INFO: (4) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 3.266297ms) Apr 4 18:59:07.914: INFO: (4) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testtest (200; 3.442977ms) Apr 4 18:59:07.914: INFO: (4) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 3.502631ms) Apr 4 18:59:07.914: INFO: (4) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 3.749433ms) Apr 4 18:59:07.914: INFO: (4) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 3.832479ms) Apr 4 18:59:07.915: INFO: (4) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 4.21965ms) Apr 4 18:59:07.915: INFO: (4) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 4.348941ms) Apr 4 18:59:07.915: INFO: (4) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 4.490382ms) Apr 4 18:59:07.915: INFO: (4) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname2/proxy/: tls qux (200; 4.398542ms) Apr 4 18:59:07.918: INFO: (5) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: test (200; 2.93995ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 4.374276ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 4.434007ms) Apr 4 
18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 4.434807ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 4.455636ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testt... (200; 4.56106ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 4.601446ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 4.625703ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 4.701548ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.888397ms) Apr 4 18:59:07.920: INFO: (5) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.965366ms) Apr 4 18:59:07.924: INFO: (6) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 3.861539ms) Apr 4 18:59:07.924: INFO: (6) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 3.869815ms) Apr 4 18:59:07.924: INFO: (6) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testt... 
(200; 4.0431ms) Apr 4 18:59:07.924: INFO: (6) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.083783ms) Apr 4 18:59:07.924: INFO: (6) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: test (200; 4.491395ms) Apr 4 18:59:07.925: INFO: (6) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.559006ms) Apr 4 18:59:07.925: INFO: (6) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 4.638741ms) Apr 4 18:59:07.925: INFO: (6) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 4.668202ms) Apr 4 18:59:07.925: INFO: (6) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 4.650122ms) Apr 4 18:59:07.928: INFO: (7) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 2.576797ms) Apr 4 18:59:07.928: INFO: (7) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: t... 
(200; 2.593968ms) Apr 4 18:59:07.928: INFO: (7) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 2.627856ms) Apr 4 18:59:07.928: INFO: (7) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 2.719985ms) Apr 4 18:59:07.928: INFO: (7) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testtest (200; 3.10586ms) Apr 4 18:59:07.928: INFO: (7) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 3.30801ms) Apr 4 18:59:07.929: INFO: (7) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 3.711061ms) Apr 4 18:59:07.929: INFO: (7) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname2/proxy/: tls qux (200; 4.278937ms) Apr 4 18:59:07.930: INFO: (7) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 4.573709ms) Apr 4 18:59:07.930: INFO: (7) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 4.609991ms) Apr 4 18:59:07.930: INFO: (7) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 4.762322ms) Apr 4 18:59:07.930: INFO: (7) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 5.415209ms) Apr 4 18:59:07.934: INFO: (8) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testtest (200; 4.727839ms) Apr 4 18:59:07.936: INFO: (8) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:1080/proxy/: t... 
(200; 4.996945ms) Apr 4 18:59:07.936: INFO: (8) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 5.46464ms) Apr 4 18:59:07.936: INFO: (8) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 5.523266ms) Apr 4 18:59:07.936: INFO: (8) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 5.588221ms) Apr 4 18:59:07.936: INFO: (8) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 5.613625ms) Apr 4 18:59:07.936: INFO: (8) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: testt... (200; 10.01484ms) Apr 4 18:59:07.947: INFO: (9) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n/proxy/: test (200; 10.025989ms) Apr 4 18:59:07.947: INFO: (9) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 10.053194ms) Apr 4 18:59:07.947: INFO: (9) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 10.155732ms) Apr 4 18:59:07.947: INFO: (9) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 10.105462ms) Apr 4 18:59:07.948: INFO: (9) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname2/proxy/: tls qux (200; 10.549524ms) Apr 4 18:59:07.948: INFO: (9) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 10.445922ms) Apr 4 18:59:07.949: INFO: (9) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 11.55957ms) Apr 4 18:59:07.949: INFO: (9) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 12.155596ms) Apr 4 18:59:07.952: INFO: (10) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 2.686551ms) Apr 4 18:59:07.952: INFO: (10) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:462/proxy/: tls qux (200; 2.986981ms) Apr 4 
18:59:07.956: INFO: (10) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 6.267695ms) Apr 4 18:59:07.956: INFO: (10) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n/proxy/: test (200; 6.323133ms) Apr 4 18:59:07.956: INFO: (10) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 6.39484ms) Apr 4 18:59:07.956: INFO: (10) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testt... (200; 6.755543ms) Apr 4 18:59:07.985: INFO: (11) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: testt... (200; 28.866121ms) Apr 4 18:59:07.985: INFO: (11) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n/proxy/: test (200; 28.854659ms) Apr 4 18:59:07.985: INFO: (11) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 28.918378ms) Apr 4 18:59:07.987: INFO: (11) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 30.013258ms) Apr 4 18:59:07.987: INFO: (11) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 30.046989ms) Apr 4 18:59:07.987: INFO: (11) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 30.131863ms) Apr 4 18:59:07.987: INFO: (11) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 30.13483ms) Apr 4 18:59:07.987: INFO: (11) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 30.030974ms) Apr 4 18:59:07.987: INFO: (11) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname2/proxy/: tls qux (200; 30.252818ms) Apr 4 18:59:07.990: INFO: (12) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 3.50976ms) Apr 4 18:59:07.991: INFO: (12) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 4.462057ms) Apr 4 18:59:07.991: INFO: (12) 
/api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.508508ms) Apr 4 18:59:07.991: INFO: (12) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n/proxy/: test (200; 4.522357ms) Apr 4 18:59:07.991: INFO: (12) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.568355ms) Apr 4 18:59:07.991: INFO: (12) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:1080/proxy/: t... (200; 4.510834ms) Apr 4 18:59:07.991: INFO: (12) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 4.516357ms) Apr 4 18:59:07.991: INFO: (12) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testtestt... (200; 5.598884ms) Apr 4 18:59:07.999: INFO: (13) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 5.671448ms) Apr 4 18:59:07.999: INFO: (13) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: test (200; 5.735896ms) Apr 4 18:59:07.999: INFO: (13) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 5.903397ms) Apr 4 18:59:08.002: INFO: (14) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 2.709271ms) Apr 4 18:59:08.002: INFO: (14) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 2.761343ms) Apr 4 18:59:08.002: INFO: (14) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 2.918403ms) Apr 4 18:59:08.002: INFO: (14) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: test (200; 4.914752ms) Apr 4 18:59:08.004: INFO: (14) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:462/proxy/: tls qux (200; 5.065078ms) Apr 4 18:59:08.004: INFO: (14) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 5.099346ms) Apr 4 18:59:08.004: INFO: (14) 
/api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testt... (200; 5.166892ms) Apr 4 18:59:08.005: INFO: (14) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 5.821108ms) Apr 4 18:59:08.005: INFO: (14) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 6.087315ms) Apr 4 18:59:08.005: INFO: (14) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 6.082725ms) Apr 4 18:59:08.005: INFO: (14) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 6.149932ms) Apr 4 18:59:08.005: INFO: (14) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname2/proxy/: tls qux (200; 6.179003ms) Apr 4 18:59:08.005: INFO: (14) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 6.137916ms) Apr 4 18:59:08.007: INFO: (15) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 1.966979ms) Apr 4 18:59:08.010: INFO: (15) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: test (200; 4.751462ms) Apr 4 18:59:08.010: INFO: (15) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 4.705791ms) Apr 4 18:59:08.010: INFO: (15) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 4.785855ms) Apr 4 18:59:08.010: INFO: (15) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 4.736081ms) Apr 4 18:59:08.010: INFO: (15) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 4.848231ms) Apr 4 18:59:08.011: INFO: (15) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testt... 
(200; 5.367337ms) Apr 4 18:59:08.011: INFO: (15) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname2/proxy/: tls qux (200; 5.401752ms) Apr 4 18:59:08.011: INFO: (15) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 5.427806ms) Apr 4 18:59:08.011: INFO: (15) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 5.468926ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 3.464084ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n/proxy/: test (200; 3.554018ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 3.582588ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: t... (200; 3.669698ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 3.60981ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:162/proxy/: bar (200; 3.67945ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 3.717048ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:462/proxy/: tls qux (200; 3.856332ms) Apr 4 18:59:08.015: INFO: (16) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:1080/proxy/: testtesttest (200; 4.527915ms) Apr 4 18:59:08.021: INFO: (17) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.53708ms) Apr 4 18:59:08.021: INFO: (17) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 4.571795ms) Apr 4 18:59:08.021: INFO: (17) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 4.623518ms) Apr 4 
18:59:08.021: INFO: (17) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:1080/proxy/: t... (200; 4.674972ms) Apr 4 18:59:08.021: INFO: (17) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 4.693372ms) Apr 4 18:59:08.021: INFO: (17) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: testtest (200; 3.054815ms) Apr 4 18:59:08.024: INFO: (18) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:462/proxy/: tls qux (200; 3.119649ms) Apr 4 18:59:08.025: INFO: (18) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 3.663121ms) Apr 4 18:59:08.025: INFO: (18) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:162/proxy/: bar (200; 3.974423ms) Apr 4 18:59:08.025: INFO: (18) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 4.141951ms) Apr 4 18:59:08.025: INFO: (18) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 4.190152ms) Apr 4 18:59:08.025: INFO: (18) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 4.26188ms) Apr 4 18:59:08.025: INFO: (18) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:1080/proxy/: t... (200; 4.231129ms) Apr 4 18:59:08.025: INFO: (18) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 4.256207ms) Apr 4 18:59:08.025: INFO: (18) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: test (200; 7.821918ms) Apr 4 18:59:08.033: INFO: (19) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:443/proxy/: testt... 
(200; 9.718151ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/pods/proxy-service-mkt54-b799n:160/proxy/: foo (200; 9.687448ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname1/proxy/: tls baz (200; 10.149098ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:462/proxy/: tls qux (200; 9.638131ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/pods/http:proxy-service-mkt54-b799n:160/proxy/: foo (200; 9.822825ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname2/proxy/: bar (200; 10.455779ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/services/https:proxy-service-mkt54:tlsportname2/proxy/: tls qux (200; 10.171507ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname1/proxy/: foo (200; 9.58961ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/pods/https:proxy-service-mkt54-b799n:460/proxy/: tls baz (200; 10.243068ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/services/http:proxy-service-mkt54:portname1/proxy/: foo (200; 10.022045ms) Apr 4 18:59:08.036: INFO: (19) /api/v1/namespaces/proxy-693/services/proxy-service-mkt54:portname2/proxy/: bar (200; 10.622286ms) STEP: deleting ReplicationController proxy-service-mkt54 in namespace proxy-693, will wait for the garbage collector to delete the pods Apr 4 18:59:08.092: INFO: Deleting ReplicationController proxy-service-mkt54 took: 4.531724ms Apr 4 18:59:08.393: INFO: Terminating ReplicationController proxy-service-mkt54 pods took: 300.364214ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:59:10.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-693" for this suite. 
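The proxy test above hits the same backend through many API-server proxy path variants: pod vs. service targets, with an optional `http:`/`https:` scheme prefix and an optional port or port name appended to the target. A small sketch of how those paths are assembled (the helper name and signature are illustrative, not the e2e framework's own):

```go
package main

import "fmt"

// proxyPath builds an API-server proxy path like the ones requested above.
// scheme may be "", "http", or "https"; kind is "pods" or "services";
// port is appended as ":<port>" when non-empty (it may also be a port name).
func proxyPath(ns, kind, scheme, name, port string) string {
	target := name
	if scheme != "" {
		target = scheme + ":" + target
	}
	if port != "" {
		target = target + ":" + port
	}
	return fmt.Sprintf("/api/v1/namespaces/%s/%s/%s/proxy/", ns, kind, target)
}

func main() {
	// Reproduce two of the paths seen in the log.
	fmt.Println(proxyPath("proxy-693", "pods", "", "proxy-service-mkt54-b799n", "160"))
	fmt.Println(proxyPath("proxy-693", "services", "https", "proxy-service-mkt54", "tlsportname1"))
}
```

With 16 such variants and 20 attempts each, the test issues the 320 total requests reported at the start of the run.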
• [SLOW TEST:15.212 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":281,"completed":277,"skipped":4635,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:59:10.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:59:11.009: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-5c53620a-f752-4228-834e-6fffd0c3902c" in namespace "security-context-test-5685" to be "Succeeded or Failed" Apr 4 18:59:11.018: INFO: Pod "alpine-nnp-false-5c53620a-f752-4228-834e-6fffd0c3902c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.035246ms Apr 4 18:59:13.022: INFO: Pod "alpine-nnp-false-5c53620a-f752-4228-834e-6fffd0c3902c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012851177s Apr 4 18:59:15.026: INFO: Pod "alpine-nnp-false-5c53620a-f752-4228-834e-6fffd0c3902c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017328375s Apr 4 18:59:15.026: INFO: Pod "alpine-nnp-false-5c53620a-f752-4228-834e-6fffd0c3902c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:59:15.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5685" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":281,"completed":278,"skipped":4640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:59:15.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod 
STEP: Wait for the deployment to be ready Apr 4 18:59:15.714: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 4 18:59:17.722: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623555, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623555, loc:(*time.Location)(0x7bcb460)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623555, loc:(*time.Location)(0x7bcb460)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721623555, loc:(*time.Location)(0x7bcb460)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6cc9cc9dc\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 4 18:59:20.772: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:59:20.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6500" for this suite. STEP: Destroying namespace "webhook-6500-markers" for this suite. 
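A mutating webhook like the one registered above responds to an AdmissionReview with an RFC 6902 JSON Patch (base64-encoded in the response), which the API server applies before persisting the pod. The log does not show the actual mutation this webhook performs, so the patch below is a hypothetical example (adding a default label), only the wire format is the real one:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// patchOp is one RFC 6902 JSON Patch operation, the format a mutating
// admission webhook returns in its AdmissionReview response.
type patchOp struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value,omitempty"`
}

// defaultingPatch builds a hypothetical mutation: default a label on the
// incoming pod. The label key/value are illustrative, not from the test.
func defaultingPatch() string {
	patch := []patchOp{{Op: "add", Path: "/metadata/labels/mutated", Value: "true"}}
	b, _ := json.Marshal(patch)
	return string(b)
}

func main() {
	fmt.Println(defaultingPatch())
}
```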
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.003 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":281,"completed":279,"skipped":4670,"failed":0} SSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:59:21.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:75 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Apr 4 18:59:21.158: INFO: Creating deployment "webserver-deployment" Apr 4 18:59:21.174: INFO: Waiting for observed generation 1 Apr 4 18:59:23.184: INFO: Waiting for all required pods to come up Apr 4 18:59:23.189: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 4 18:59:33.197: INFO: Waiting for deployment "webserver-deployment" 
to complete Apr 4 18:59:33.205: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 4 18:59:33.212: INFO: Updating deployment webserver-deployment Apr 4 18:59:33.212: INFO: Waiting for observed generation 2 Apr 4 18:59:35.221: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 4 18:59:35.224: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 4 18:59:35.226: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 4 18:59:35.391: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 4 18:59:35.391: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 4 18:59:35.394: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 4 18:59:35.398: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 4 18:59:35.398: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 4 18:59:35.404: INFO: Updating deployment webserver-deployment Apr 4 18:59:35.404: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 4 18:59:35.608: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 4 18:59:35.612: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 Apr 4 18:59:36.816: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5084 /apis/apps/v1/namespaces/deployment-5084/deployments/webserver-deployment a8bd3147-0716-482a-96e1-3e0495956bef 5418470 3 2020-04-04 18:59:21 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
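The arithmetic the test verifies above (scaling the Deployment from 10 to 30 while two ReplicaSets hold 8 and 5 replicas, ending at `.spec.replicas` of 20 and 13) can be sketched as follows. This is an illustrative simplification, not the actual controller code: with `maxSurge: 3`, the allowed total becomes 30 + 3 = 33, the surplus of 33 − 13 = 20 is split in proportion to each ReplicaSet's current size, and any integer remainder is handed out by a deterministic tie-break (here, hypothetically, to the last ReplicaSet, which matches this run's observed result).

```python
def proportional_scale(replicas, desired, max_surge):
    """Sketch of Deployment proportional scaling.

    replicas:  current .spec.replicas of each ReplicaSet, old RS first
    desired:   new .spec.replicas of the Deployment
    max_surge: resolved maxSurge as an absolute count
    """
    total = sum(replicas)               # 8 + 5 = 13
    allowed = desired + max_surge       # 30 + 3 = 33
    delta = allowed - total             # 33 - 13 = 20 replicas to distribute

    # Each ReplicaSet gets the floor of its proportional share of the delta.
    adds = [delta * r // total for r in replicas]   # [12, 7]
    leftover = delta - sum(adds)                    # 1

    # Remainder tie-break is a simplifying assumption: give it to the
    # last (newest) ReplicaSet, matching the values seen in this test run.
    adds[-1] += leftover
    return [r + a for r, a in zip(replicas, adds)]

# Values from the log: first rollout's RS at 8, second at 5, scaled 10 -> 30.
print(proportional_scale([8, 5], desired=30, max_surge=3))  # [20, 13]
```

Note that the real controller (`deploymentutil.GetProportion` in kube-controller-manager) tracks its remainder handling via the `deployment.kubernetes.io/max-replicas` annotation visible in the ReplicaSet dumps below; the sketch only reproduces the proportional split itself.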
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c60db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-04 18:59:34 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-04 18:59:35 +0000 UTC,LastTransitionTime:2020-04-04 18:59:35 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 4 18:59:37.123: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-5084 
/apis/apps/v1/namespaces/deployment-5084/replicasets/webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 5418524 3 2020-04-04 18:59:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a8bd3147-0716-482a-96e1-3e0495956bef 0xc006c42b47 0xc006c42b48}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006c42bb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:59:37.123: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 4 18:59:37.124: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-5084 /apis/apps/v1/namespaces/deployment-5084/replicasets/webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 5418508 3 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a8bd3147-0716-482a-96e1-3e0495956bef 0xc006c42a87 0xc006c42a88}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006c42ae8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 4 18:59:37.203: INFO: Pod "webserver-deployment-595b5b9587-2kxp4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2kxp4 webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-2kxp4 29910d70-e10d-4bac-b6fc-f78582952741 5418505 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c430d7 0xc006c430d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.203: INFO: Pod "webserver-deployment-595b5b9587-4mxlq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4mxlq webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-4mxlq f7e444b3-13ca-422e-8ebb-4829e8d6743c 5418539 0 2020-04-04 18:59:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c431f7 0xc006c431f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-04 18:59:36 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.203: INFO: Pod "webserver-deployment-595b5b9587-6q2hm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6q2hm webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-6q2hm 710ddf4e-829b-4f7d-9115-73e4c9994c80 5418498 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c43357 0xc006c43358}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.203: INFO: Pod "webserver-deployment-595b5b9587-8kk5q" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8kk5q webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-8kk5q 3cede415-d20d-4d5d-ba6d-76392408469a 5418484 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c43477 0xc006c43478}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.203: INFO: Pod "webserver-deployment-595b5b9587-9fsqg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9fsqg webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-9fsqg 11d20556-6e01-4169-9cd0-89731ef1cf53 5418483 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c43597 0xc006c43598}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.203: INFO: Pod "webserver-deployment-595b5b9587-9m4dm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9m4dm webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-9m4dm f17465be-dae1-4c91-a4af-120fa04aaf5d 5418512 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c436c7 0xc006c436c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.204: INFO: Pod "webserver-deployment-595b5b9587-9wlmw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9wlmw webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-9wlmw 3c57c33c-9af0-416a-88cf-20ef53788286 5418513 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c437e7 0xc006c437e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.204: INFO: Pod "webserver-deployment-595b5b9587-bwlhc" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bwlhc webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-bwlhc e56fbd01-3488-4254-a3d9-6cd2c6a5f157 5418342 0 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c43907 0xc006c43908}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.46,StartTime:2020-04-04 18:59:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:59:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://69b7fb1e327a1cf36b86e20ef4378eb41a4ab9b39c8fcb13d7217904051a01f7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.204: INFO: Pod "webserver-deployment-595b5b9587-c97fn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c97fn webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-c97fn d555c884-3c97-4526-9f69-6aac3714af68 5418383 0 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c43a87 0xc006c43a88}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.48,StartTime:2020-04-04 18:59:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:59:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://85f3530698fbdbc9dbd1dbfd22b94952b87cd797e5c93fe252fc023d8bdcf8e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.204: INFO: Pod "webserver-deployment-595b5b9587-ch2jf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ch2jf webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-ch2jf 03ed326f-8eb5-4c0a-b19c-c3ccc92189c6 5418364 0 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c43c07 0xc006c43c08}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.118,StartTime:2020-04-04 18:59:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:59:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e03b34f3191e0b6f439fb270b8682523e686ef25b32def0bb370c36e279d5321,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.118,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.204: INFO: Pod "webserver-deployment-595b5b9587-crjvg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-crjvg webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-crjvg 8acbf20b-a51e-46e9-888a-4e1bfa70223c 5418481 0 2020-04-04 18:59:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c43d87 0xc006c43d88}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.204: INFO: Pod "webserver-deployment-595b5b9587-dddpx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dddpx webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-dddpx bec10f7e-530d-4563-b1d4-bb41d7282ae2 5418363 0 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c43ea7 0xc006c43ea8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.47,StartTime:2020-04-04 18:59:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:59:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5fc1942e6aab3c2a59b95436f0c29f178714b7daf5e52153f7fea7224760405b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.47,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.205: INFO: Pod "webserver-deployment-595b5b9587-dg4pq" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dg4pq webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-dg4pq 5eaac451-43cf-4b87-a8b2-5dbcfc68048f 5418540 0 2020-04-04 18:59:35 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c72037 0xc006c72038}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 18:59:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.205: INFO: Pod "webserver-deployment-595b5b9587-djqb8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-djqb8 webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-djqb8 8558bb07-42b1-47eb-9f5a-4b422e2a99c0 5418376 0 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c72197 0xc006c72198}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.120,StartTime:2020-04-04 18:59:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:59:29 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e44e5fa9350a3325f09878bc1868aa7ba059c485247f0bceb4d5e97a312ea571,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.205: INFO: Pod "webserver-deployment-595b5b9587-grk26" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-grk26 webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-grk26 fd3a4f4f-a199-4cd3-86a2-5a7ce9d2efbe 5418511 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c72317 0xc006c72318}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.205: INFO: Pod "webserver-deployment-595b5b9587-jcndm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jcndm webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-jcndm 7f04328e-a999-443a-ae43-d153a786fe6f 5418515 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c72437 0xc006c72438}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.205: INFO: Pod "webserver-deployment-595b5b9587-nfkjx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nfkjx webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-nfkjx 7d85da41-e7b1-402b-9a2b-d991e4ab6177 5418356 0 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c72557 0xc006c72558}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.119,StartTime:2020-04-04 18:59:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:59:28 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://750c757a5dde7f8ffd424c9d985dc6f57b4a4cc3140691e3bd25e2be69747175,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.119,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.205: INFO: Pod "webserver-deployment-595b5b9587-pr8gv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pr8gv webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-pr8gv 7766f84e-accf-413e-8785-ae9b1a1c85d0 5418308 0 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c726d7 0xc006c726d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectR
eference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:10.244.1.117,StartTime:2020-04-04 18:59:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:59:24 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://da326edf6c0be7071ddbb4f62441050ceec0188ec3fb637ebafc961ad9d41e2d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.117,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.206: INFO: Pod "webserver-deployment-595b5b9587-rjgrx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rjgrx webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-rjgrx 3c9c2410-f769-42a8-948d-b6e9f706d2dd 5418499 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c72857 0xc006c72858}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.206: INFO: Pod "webserver-deployment-595b5b9587-zqhlw" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zqhlw webserver-deployment-595b5b9587- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-595b5b9587-zqhlw ff89e7d7-4bff-4b04-bd53-410236cb0cf3 5418320 0 2020-04-04 18:59:21 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 faaa53f8-3ac4-4845-9281-f8661f19e840 0xc006c72977 0xc006c72978}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectRe
ference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:10.244.2.45,StartTime:2020-04-04 18:59:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-04 18:59:25 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2f7255733381266714bf5a0ce87754fb489cdbd477eeabaef14fbabb5e770468,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.206: INFO: Pod "webserver-deployment-c7997dcc8-2fdm5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2fdm5 webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-2fdm5 5ab84864-d472-41b1-98a5-13309f8481ab 5418456 0 2020-04-04 18:59:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c72af7 0xc006c72af8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-04 18:59:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.206: INFO: Pod "webserver-deployment-c7997dcc8-2j9tp" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-2j9tp webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-2j9tp 794f9c8e-df2a-4cbc-b90d-3f884fbf8753 5418517 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c72c77 0xc006c72c78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.206: INFO: Pod "webserver-deployment-c7997dcc8-75s8k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-75s8k webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-75s8k 85a437a3-2286-4bb3-bdf3-b22245c0d619 5418450 0 2020-04-04 18:59:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c72da7 0xc006c72da8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-04 18:59:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.207: INFO: Pod "webserver-deployment-c7997dcc8-86z2x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-86z2x webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-86z2x 2dd01ed2-45bf-40e6-951f-a947394a4edc 5418448 0 2020-04-04 18:59:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c72f27 0xc006c72f28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 18:59:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.207: INFO: Pod "webserver-deployment-c7997dcc8-8lkxn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8lkxn webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-8lkxn b8e1aebf-8807-40fb-acbd-43e03513f59e 5418434 0 2020-04-04 18:59:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c730a7 0xc006c730a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.13,PodIP:,StartTime:2020-04-04 18:59:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.207: INFO: Pod "webserver-deployment-c7997dcc8-gncb8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gncb8 webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-gncb8 9e231909-1210-40c3-aea6-240bf4474ef8 5418479 0 2020-04-04 18:59:35 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c73227 0xc006c73228}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.207: INFO: Pod "webserver-deployment-c7997dcc8-jmv4b" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jmv4b webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-jmv4b a271ebb8-cf47-4507-acde-7681d39630e8 5418428 0 2020-04-04 18:59:33 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c73357 0xc006c73358}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.12,PodIP:,StartTime:2020-04-04 18:59:33 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.207: INFO: Pod "webserver-deployment-c7997dcc8-jrgmz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jrgmz webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-jrgmz dff532d4-53b0-4460-8fad-4f53c17c6314 5418514 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c734d7 0xc006c734d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.207: INFO: Pod "webserver-deployment-c7997dcc8-rpr6n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rpr6n webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-rpr6n 8f8806ef-6dd8-4678-b152-a5d68b489bec 5418500 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c73607 0xc006c73608}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.208: INFO: Pod "webserver-deployment-c7997dcc8-scvz4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-scvz4 webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-scvz4 a0f123b2-b45e-4d33-86da-fa843a8d7ff8 5418487 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c73737 0xc006c73738}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.208: INFO: Pod "webserver-deployment-c7997dcc8-wfs6q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wfs6q webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-wfs6q ca3088cf-6623-41f5-8fa5-9f1e344b6690 5418526 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c73867 0xc006c73868}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.208: INFO: Pod "webserver-deployment-c7997dcc8-wq5nq" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wq5nq webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-wq5nq 11812d62-5267-45f9-840a-978bc855f41d 5418516 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c73997 0xc006c73998}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 4 18:59:37.208: INFO: Pod "webserver-deployment-c7997dcc8-xjfj9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xjfj9 webserver-deployment-c7997dcc8- deployment-5084 /api/v1/namespaces/deployment-5084/pods/webserver-deployment-c7997dcc8-xjfj9 1feb6a45-588f-4660-8722-7a95038c257a 5418518 0 2020-04-04 18:59:36 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 a0b6c7fb-0c79-4d66-a4cd-2ee58cabb86c 0xc006c73ac7 0xc006c73ac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-srndk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-srndk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-srndk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:latest-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Sub
domain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-04 18:59:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:59:37.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5084" for this suite. 
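The proportional-scaling spec concluding above exercises the deployment controller's behavior of distributing a new replica total across the deployment's ReplicaSets in proportion to their current sizes. A minimal sketch of that allocation idea follows; the function name and the leftover-distribution tie-break are illustrative simplifications, not the controller's actual implementation:

```python
def scale_proportionally(replica_counts, new_total):
    """Distribute new_total replicas across ReplicaSets in proportion to
    their current sizes (simplified sketch of proportional scaling).

    Integer shares are computed first; any leftover replicas from the
    rounding are handed to the largest ReplicaSets, one each.
    """
    current_total = sum(replica_counts)
    if current_total == 0:
        # Nothing to proportion against: give everything to the first RS.
        return [new_total] + [0] * (len(replica_counts) - 1)

    # Floor of each ReplicaSet's proportional share of the new total.
    shares = [count * new_total // current_total for count in replica_counts]

    # Flooring can leave a remainder; award it to the largest RSes first.
    leftover = new_total - sum(shares)
    by_size = sorted(range(len(replica_counts)),
                     key=lambda i: replica_counts[i], reverse=True)
    for i in by_size[:leftover]:
        shares[i] += 1
    return shares
```

For example, scaling ReplicaSets of sizes 7 and 3 to a new total of 5 yields floor shares of 3 and 1, with the single leftover replica going to the larger ReplicaSet.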
• [SLOW TEST:16.355 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":281,"completed":280,"skipped":4673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Apr 4 18:59:37.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-02767861-e76f-46f6-8a0f-bf4a614e52e9 STEP: Creating a pod to test consume configMaps Apr 4 18:59:38.878: INFO: Waiting up to 5m0s for pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a" in namespace "configmap-7254" to be "Succeeded or Failed" Apr 4 18:59:39.234: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 356.344062ms Apr 4 18:59:41.410: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.531710671s Apr 4 18:59:43.727: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.849171589s Apr 4 18:59:45.940: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.062336349s Apr 4 18:59:48.345: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.466712775s Apr 4 18:59:50.452: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.573904236s Apr 4 18:59:52.697: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Running", Reason="", readiness=true. Elapsed: 13.819463116s Apr 4 18:59:54.853: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Running", Reason="", readiness=true. Elapsed: 15.974979221s Apr 4 18:59:56.857: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.979339303s STEP: Saw pod success Apr 4 18:59:56.857: INFO: Pod "pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a" satisfied condition "Succeeded or Failed" Apr 4 18:59:56.861: INFO: Trying to get logs from node latest-worker pod pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a container configmap-volume-test: STEP: delete the pod Apr 4 18:59:56.900: INFO: Waiting for pod pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a to disappear Apr 4 18:59:56.912: INFO: Pod pod-configmaps-10ec4eb1-ba2a-4de5-a80d-92e9f02d8e9a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Apr 4 18:59:56.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7254" for this suite. 
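The repeated `Phase="Pending" ... Elapsed: ...` lines in the ConfigMap spec above come from the framework polling the pod until it reaches "Succeeded or Failed" or a timeout expires. A language-agnostic sketch of that poll loop is below; the function name and the injectable clock/sleep hooks (used here so the loop can be tested without real waiting) are assumptions, not the e2e framework's actual API:

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0,
                       now=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns truthy or timeout_s elapses.

    Returns the elapsed seconds on success; raises TimeoutError on
    timeout. now/sleep are injectable for deterministic testing.
    """
    start = now()
    while True:
        elapsed = now() - start
        if check():
            return elapsed          # condition met, e.g. pod Succeeded
        if elapsed >= timeout_s:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        sleep(interval_s)           # back off before the next poll
```

With a two-second interval, a pod that reports Pending twice, then Running, then Succeeded would satisfy the condition on the fourth poll, mirroring the cadence of the log lines above.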
• [SLOW TEST:19.503 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":281,"completed":281,"skipped":4700,"failed":0} SSSSSSSSSSSSSSSSApr 4 18:59:56.919: INFO: Running AfterSuite actions on all nodes Apr 4 18:59:56.919: INFO: Running AfterSuite actions on node 1 Apr 4 18:59:56.919: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":281,"completed":281,"skipped":4716,"failed":0} Ran 281 of 4997 Specs in 6360.222 seconds SUCCESS! -- 281 Passed | 0 Failed | 0 Pending | 4716 Skipped PASS